[SEQ RERUN] “I don’t know.”
Today’s post, “I don’t know.”, was originally published on 21 December 2006. A summary (taken from the LW wiki):
An edited instant messaging conversation regarding the use of the phrase “I don’t know”. “I don’t know” is a useful phrase if you want to avoid getting in trouble or convey the fact that you don’t have access to privileged information.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we’ll be going through Eliezer Yudkowsky’s old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Modesty Argument, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day’s sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
Eliezer missed the possibility that the tree had no apples, which is fairly likely.
This is exactly the kind of post that I’m at LW for. I am asked about 20-10,000 questions a day, the majority of which I have to answer “I don’t know” to. (How anyone parented before Google is beyond me.) Often I use “I don’t know” as a replacement for “I don’t have the confidence to answer your question adequately[1] in the 15 seconds that I have before you ask another question.”
I understand that conversations with children might seem trivial to most here, and that this post was never intended to be used in the context I’ve taken it in. Also, it seems that “X” may be a non-rationalist, and children usually are. (I think it’s very possible that we are all born as rationalists.) So, although I may be beyond hope, my children are not. This post reminds me that along with answering questions I’m not only passing along what I know; I’m passing along my thinking process. I’m also directly transferring all my biases.
So what has come to me after reading this is that it’s far better for me to vocalize the process I’m going through to find an answer rather than to try to just come up with one. And my knee-jerk reaction of thinking “I need to answer” is a bias in itself, probably the result of decades of schooling and testing.
1: Often explanations are simplified to the extent that they become misleading or just wrong, e.g. any non-local news story or a history textbook.
If you haven’t already seen it, this might interest you; it’s a pretty cool story. Also, this.
Thanks! Much appreciated.
I sometimes wish there were more parenting stuff on LW (and I suspect there will be in 10 years or so). But then I think it’s just as well there isn’t, as parenting forums are often more contentious than political ones.
I’m not sure that I agree with this.
“I don’t know” is an effective way of communicating “I probably don’t have any more information on the subject than you do”. That itself is both a useful and a meaningful thing to communicate.
Or, to put it another way: if EY said to me “there are between 10 and 1000 apples on that tree”, then I would use that assessment to update my Bayesian probability of how many apples are likely to be on the tree. However, if EY does not have any more information on the subject than I do, then updating my Bayesian probability based on his statement adds nothing but randomness to my answer.
His answer isn’t random. It’s based on his knowledge of apple trees in bloom (he states later that he assumed the tree was an apple tree in bloom). If you knew nothing about apple trees, or knew less than he did, or knew different but no more reliable information than he did, or were less able to correctly interpret what information you did have, then you would have learned something from him. If you had all the information he did, and believed that he was a rationalist and at the least not worse at coming to the right answer than you, and you had a different estimate than he did, then you still ought to update towards his estimate (Aumann’s Agreement Theorem).
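As a rough illustration of what “update towards his estimate” can look like in practice, here is a minimal sketch; it is my construction, not Aumann’s theorem itself, and the function names, weights, and numbers are purely illustrative. One crude heuristic is to pool the two stated probabilities in log-odds space, weighted by how reliable you judge each party to be:

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def pool(p_mine, p_theirs, w_theirs=0.5):
    # Weighted average of two probability estimates in log-odds space.
    # A crude heuristic stand-in for the mutual updating that Aumann's
    # Agreement Theorem says ideal reasoners would converge to;
    # w_theirs encodes how reliable I judge the other estimator to be.
    return sigmoid((1 - w_theirs) * logit(p_mine) + w_theirs * logit(p_theirs))

# Illustrative numbers: I think there's a 30% chance the tree holds over
# 100 apples; he says 70%. Trusting him as much as myself, I move to 50%.
print(round(pool(0.30, 0.70, w_theirs=0.5), 2))  # 0.5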
This does illustrate the point that simply stating your final probability distribution isn’t really sufficient to tell everything you know. Not surprisingly, you can’t compress much past the actual original evidence without suffering at least some amount of information loss. How important this loss is depends on the domain in question. It is difficult to come up with a general algorithm for useful information transfer even just between rationalists, and you cannot really do it at all with someone who doesn’t know probability theory.
Interestingly enough, Eliezer answers a number of questions during his Q&A just this way.
Hmm. I have ready opinions on everything, whether I know what the hell I’m talking about or not. I think this may be a bad thing, and have therefore taken to trying to say “I don’t know” when my reasons for my opinions (as best I can stacktrace them) don’t pass personal muster. Is this bad?
Yes! You’re communicating less information! Well, it could be a good thing, if you noticed an anti-correlation between your no-good-reason opinions and whatever the best-reasons opinion is. But I don’t think that would actually be the case.
I would advise something more like “I’m just talking out of my ass here, but it seems like”. For results of studies whose details I can’t remember, I like “don’t quote me on this, but”; for qualitative answers, “no bloody idea, but”; and for quantitative answers, “I’m making this up, but”. That way you share the opinion and also give them a reasonably good picture of how much support you have for it.
If you were dealing with a Bayesian you could just say “I believe x, based on three signals of 5:1 in favour of x”. Obviously, this is gibberish to the average person—but a Bayesian would only update their posterior slightly on hearing your weakly-supported opinion, so using such phrases gets average people to only change their minds a little bit; thus being a little bit more like a Bayesian reasoner.
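For concreteness, here is a minimal sketch of the arithmetic behind “three signals of 5:1 in favour of x” (the function name and numbers are just for illustration): convert the prior to odds, multiply by each independent likelihood ratio, then convert back to a probability.

```python
from fractions import Fraction

def update_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each independent likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Even prior odds (1:1), then three independent 5:1 signals in favour of x.
posterior_odds = update_odds(Fraction(1), [Fraction(5)] * 3)
posterior_prob = posterior_odds / (posterior_odds + 1)
print(posterior_odds, float(posterior_prob))  # 125 ~0.992
```

The same bookkeeping works in reverse: a hearer who judges your report to be worth only a single weak signal multiplies their own odds by a small ratio, and so changes their mind only a little.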
It doesn’t just depend on the accuracy of the opinions themselves, but also on how the recipient weighs the information.
If you have a certainty of 55% on a binary yes-or-no question, and you respond “it seems to me there’s a 55% chance of yes”, I wouldn’t be surprised if the non-perfect Bayesian recipient of the information treated this as if you had mentioned a certainty of 75%.
In this situation, you’re better off just saying “I don’t know” (essentially 50%), which doesn’t bias the result in the wrong direction.
Now if you believe the person you talk to will NOT put more weight on your information than they should, then sure, you should feel free to speak.
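A toy calculation makes this concrete. The hearer model below is my assumption, purely for illustration: a stated probability gets inflated by 20 points, while “I don’t know” is taken at face value as an even 50%.

```python
def hearer_belief(report):
    # Toy model (an assumption for illustration, not a claim about real
    # hearers): stated probabilities get inflated by 20 points, while
    # "I don't know" is taken at face value as an even 50%.
    return 0.50 if report == "I don't know" else min(report + 0.20, 1.0)

true_p = 0.55  # the speaker's actual, well-calibrated credence
for report in (0.55, "I don't know"):
    belief = hearer_belief(report)
    print(report, "->", belief, "| error:", round(abs(belief - true_p), 2))

# 0.55 -> 0.75 | error: 0.2
# I don't know -> 0.5 | error: 0.05
```

Under this model, the honest “55%” leaves the hearer further from the truth than the uninformative “I don’t know”, which is the comment’s point.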
Hence my useful phrases; in practice they diminish the other person’s measure of my certainty significantly.
As long as you realize that you have some knowledge, just saying the words is not necessarily bad.
Are you talking about saying “I don’t know” verbally or mentally (ie to yourself)? Verbally, it is often the most efficient way to communicate some information. Mentally, it is usually good to explicitly think of what your ignorance prior is. If you have to act on the (lack of) knowledge, at least think of what possibilities are or are not permitted by the evidence. Of course, it is often better to ask someone than to examine your ignorance prior in detail, depending on the situation.
“It’s not that I don’t know. I’m using an ignorance prior.” I find this very silly.
The point is that “I don’t know” is still a lawful state of knowledge: you’re not allowed to make stuff up even if you “don’t know”, and in fact quite often you do know something.
I think the point is that there isn’t a special case of ‘no knowledge’: there’s almost always some knowledge, and even the case of least knowledge isn’t a special magical category.
One use for “I don’t know” I’ve found is to signal a lack of confidence. If I say “I don’t know, but it seems like,” people will understand that I haven’t moved far from an ignorance prior.
Another useful function is the implied contrast: I know far more than you about voting theory, but even then I don’t know whether AV would be better, so you definitely shouldn’t claim to know.
I think the key issue with “I don’t know” (as with many other things) is that it should be a starting point rather than a stop sign. If you’re using it as a starting point, the phrase indicates that you don’t have enough relevant information to move forward but you would like to find out more. In everyday conversation, that means that you (or the person who asked you the question) are still in the information-gathering phase of the process, e.g. “I don’t know; let me google it” or “I don’t know; Rebecca might remember—have you asked her?” For rationalist-level problems, it means recognizing your ignorance (just as you should notice your confusion) rather than glossing over it or making up some plausible-sounding story that makes it seem like you understand what’s going on. “I don’t know” means that there’s work to be done in figuring it out better, and it’s worth taking a moment to see that you don’t have a clear answer yet before you start guessing.
The problem is when you take “I don’t know” to mean “it’s not worth thinking about”, or you turn it into a treasured mystery, or you conclude that you’re allowed to believe whatever you want (no need to worry about what’s true) because no one actually knows what’s true. You don’t know, and you’re fine with keeping it that way.
Eliezer actually discusses this briefly in his 12 virtues essay, where he says (in the first paragraph of the essay) that curiosity requires both recognizing your ignorance and seeking to relinquish it.
I think what Eliezer is arguing against is “I don’t know” being used to shield information that allows the speaker to privilege hypotheses that are obviously wrong if we had taken basic background knowledge into account. For example: “I don’t know if there is a teapot floating around the asteroid belt.” The teapot hypothesis is absurd if we base our priors on background knowledge, but the previous sentence makes it sound more plausible because it hides this information.