“I don’t know.”
An edited transcript of a long instant-messenger conversation that took place regarding the phrase, “I don’t know”, sparked by Robin Hanson’s previous post, “You Are Never Entitled to Your Opinion.”
[08:50] Eliezer: http://www.overcomingbias.com/2006/12/you_are_never_e.html
[09:01] X: it still seems that saying “i don’t know” in some situations is better than giving your best guess
[09:01] X: especially if you are dealing with people who will take you at your word who are not rationalists
[09:02] Eliezer: in real life, you have to choose, and bet, at some betting odds
[09:02] Eliezer: i.e., people who want to say “I don’t know” for cryonics still have to sign up or not sign up, and they’ll probably do the latter
[09:03] Eliezer: “I don’t know” is usually just a screen that people think is defensible and unarguable before they go on to do whatever they feel like, and it’s usually the wrong thing because they refused to admit to themselves what their guess was, or examine their justifications, or even realize that they’re guessing
[09:02] X: how many apples are in a tree outside?
[09:02] X: i’ve never seen it and neither have you
[09:02] Eliezer: 10 to 1000
[09:04] Eliezer: if you offer to bet me a million dollars against one dollar that the tree outside has fewer than 20 apples, when neither of us have seen it, I will take your bet
[09:04] X: is it better to say “maybe 10 to 1000” to make it clear that you are guessing when talking to people
[09:04] Eliezer: therefore I have assigned a nonzero and significant probability to apples < 20 whether I admit it or not
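The bet above reduces to a quick expected-value check. A minimal sketch (the $1,000,000-vs-$1 stakes are from the conversation; the probabilities plugged in are illustrative assumptions):

```python
# Taking the offered bet: win $1,000,000 if the tree has fewer than
# 20 apples, lose $1 otherwise.
def expected_value(p_fewer_than_20, win=1_000_000, lose=1):
    return p_fewer_than_20 * win - (1 - p_fewer_than_20) * lose

# The implied break-even probability is 1/1,000,001 -- about one in a
# million. Any "significant" probability of apples < 20 makes taking
# the bet hugely profitable in expectation.
print(expected_value(0.01))   # ~ +9999: strongly positive even at p = 1%
print(expected_value(1e-7))   # negative: only p below ~1e-6 makes it a losing bet
```

So accepting the bet reveals a probability assignment of at least roughly one in a million to "fewer than 20 apples," whether one admits to having such a number or not.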
[09:05] Eliezer: what you *say* is another issue, especially when speaking to nonrationalists, and then it is well to bear in mind that words don’t have fixed meanings; the meaning of the sounds that issue from your lips is whatever occurs in the mind of the listener. If they’re going to misinterpret something then you shouldn’t say it to them no matter what the words mean inside your own head
[09:06] Eliezer: often you are just screwed unless you want to go back and teach them rationality from scratch, and in a case like that, all you can do is say whatever creates the least inaccurate image
[09:06] X: 10 to 1000 is misleading when you say it to a nonrationalist?
[09:06] Eliezer: “I don’t know” is a good way to duck when you say it to someone who doesn’t know about probability distributions
[09:07] Eliezer: if they thought I was certain, or that my statement implied actual knowledge of the tree
[09:07] Eliezer: then the statement would mislead them
[09:07] Eliezer: and if I knew this, and did it anyway for my own purposes, it would be a lie
[09:08] Eliezer: if I just couldn’t think of anything better to say, then it would be honest but not true, if you can see the distinction
[09:08] Eliezer: honest for me, but the statement that formed in their minds would still not be true
[09:09] X: most people will say to you.… but you said....10-1000 apples
[09:09] Eliezer: then you’re just screwed
[09:10] Eliezer: nothing you can do will create in their minds a true understanding, not even “I don’t know”
[09:10] X: why bother, why not say i don’t know?
[09:10] Eliezer: honesty therefore consists of misleading them the least and telling them the most
[09:10] X: it’s better than misleading them with 10-1000
[09:10] Eliezer: as for “why bother”, well, if you’re going to ask that question, just don’t reply to their email or whatever
[09:11] Eliezer: what if you’re dealing with someone who thinks my saying “I don’t know” is a license for them to make up their own ideas, which will be a lot worse?
[09:11] X: they may act on your guess, and then say “but you said....” and lose money or get in trouble or have less respect for you
[09:11] Eliezer: then you choose to wave them off
[09:11] Eliezer: with “I don’t know”
[09:11] Eliezer: but it’s for your own sake, not for their sake
[09:12] X: [09:11] Eliezer: what if you’re dealing with someone who thinks my saying “I don’t know” is a license for them to make up their own ideas, which will be a lot worse?
[09:12] X: here i could see why
[09:12] X: but it’s difficult working with typical people in the real world
[09:13] Eliezer: the first thing to decide is, are you trying to accomplish something for yourself (like not getting in trouble) or are you trying to improve someone else’s picture of reality
[09:13] Eliezer: “I don’t know” is often a good way of not getting in trouble
[09:13] Eliezer: as for it being difficult to work with people in the real world, well, yeah
[09:13] X: if you say...10-1000, and you are wrong, and they are mad, then you say, i don’t know, they will be even madder
[09:13] Eliezer: are you trying not to get in trouble?
[09:14] Eliezer: or are you trying to improve their picture of reality?
[09:14] Eliezer: these are two different tasks
[09:14] X: especially if they have lost money or have been proven wrong by someone else
[09:14] Eliezer: if they intersect you have to decide what your tradeoff is
[09:14] Eliezer: which is more important to you
[09:14] Eliezer: then decide whether to explain for their benefit or say “I don’t know” for yours
[09:15] X: well, if it was my job, i would say i don’t know rather than be wrong, because who knows what your boss will do after he loses money listening to you
[09:15] Eliezer: okay
[09:16] Eliezer: just be clear that this is not because “I don’t know” is the rational judgment, but because “I don’t know” is the political utterance
[09:16] X: he may take your guess, and try to turn it into an actual answer because no one around you has a better plan

[09:17] Eliezer: you can easily see this by looking at your stated reason: you didn’t talk about evidence and reality and truth, but, how you might get in trouble based on someone’s reaction
[09:17] X: yes
[09:17] X: that’s what you have to put up with in the real world
[09:17] Eliezer: if you’re really worried about your boss’s welfare then you should consider that if you say “I don’t know” he must do something anyway—refusing to choose is also a choice, and refusing to act is like refusing to let time pass—and he will construct that plan based on some information, which doesn’t include your information
[09:18] Eliezer: if your life isn’t worth more than someone else’s, neither is it worth any less, and it is often proper to let fools make their own mistakes
[09:18] Eliezer: you can only throw yourself in front of so many bullets before you run out of flesh to stop them with
[09:19] X: ?
[09:19] Eliezer: in other words, you cannot always save people from themselves
[09:23] Eliezer: but all of this is wandering away from the original point, which is true and correct, that no one is ever entitled to their own opinion
[09:26] X: what is his name?
[09:26] Eliezer: ?
[09:26] X: a man outside
[09:26] X: random guy
[09:26] Eliezer: It’s probably not “Xpchtl Vaaaaaarax”
[09:26] X: probably not
[09:27] Eliezer: I suppose I could construct a second-order Markov transition diagram for the letters in names expressed in English, weighted by their frequency
[09:27] Eliezer: but that would be a lot of work
[09:28] Eliezer: so I could say “I don’t know” as shorthand for the fact that, although I possess a lot of knowledge about possible and probable names, I don’t know anything *more* than you do
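The transition-diagram idea can be sketched in a few lines. This is a toy second-order character model; the tiny training list stands in for real frequency-weighted name data, which the conversation only gestures at:

```python
import random
from collections import defaultdict

# Toy second-order (two-letter-context) Markov model over name letters.
# A serious version would be trained on census name frequencies; this
# training set is purely illustrative.
names = ["michael", "maria", "michelle", "mark", "martin", "mia"]

transitions = defaultdict(list)
for name in names:
    padded = "^^" + name + "$"          # ^ marks start, $ marks end
    for i in range(len(padded) - 2):
        context = padded[i:i + 2]
        transitions[context].append(padded[i + 2])

def sample_name(rng):
    context, out = "^^", []
    while True:
        nxt = rng.choice(transitions[context])
        if nxt == "$":
            return "".join(out)
        out.append(nxt)
        context = context[1] + nxt

rng = random.Random(0)
print([sample_name(rng) for _ in range(5)])
```

Even this toy model encodes knowledge an alien would lack: which letter sequences are plausible in English names, and which are not.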
[09:28] X: ok, so you say ruling out what you see as likely not correct is ok?
[09:28] Eliezer: what I’m saying is that I possess a large amount of knowledge about possible names
[09:28] Eliezer: all of which influences what I would bet on
[09:28] Eliezer: if I had to take a real-world action, like, guessing someone’s name with a gun to my head
[09:29] Eliezer: if I had to choose it would suddenly become very relevant that I knew Michael was one of the most statistically common names, but couldn’t remember for which years it was the most common, and that I knew Michael was more likely to be a male name than a female name
[09:29] Eliezer: if an alien had a gun to its head, telling it “I don’t know” at this point would not be helpful
[09:29] Eliezer: because there’s a whole lot I know that it doesn’t
[09:30] X: ok
[09:33] X: what about a question for which you really don’t have any information?
[09:33] X: like something only an alien would know
[09:34] Eliezer: if I have no evidence I use an appropriate Ignorance Prior, which distributes probability evenly across all possibilities, and assigns only a very small amount to any individual possibility because there are so many
[09:35] Eliezer: if the person I’m talking to already knows to use an ignorance prior, I say “I don’t know” because we already have the same probability distribution and I have nothing to add to that
[09:35] Eliezer: the ignorance prior tells me my betting odds
[09:35] Eliezer: it governs my choices
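The ignorance prior described here is simple to write down. A sketch, where the size of the candidate space is an assumption for illustration:

```python
# Ignorance prior: with no evidence, spread probability evenly over
# all N possibilities, so each single one gets only 1/N.
def ignorance_prior(possibilities):
    p = 1.0 / len(possibilities)
    return {x: p for x in possibilities}

# Fair betting odds against any single possibility are (N - 1) : 1.
def odds_against(prior, x):
    p = prior[x]
    return (1 - p) / p

candidates = [f"name_{i}" for i in range(1000)]   # hypothetical name space
prior = ignorance_prior(candidates)
print(prior["name_0"])                 # 0.001
print(odds_against(prior, "name_0"))   # ~ 999 : 1
```

The point is that "I don't know" and the ignorance prior pick out the same distribution; the prior just makes the implied betting odds explicit.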
[09:35] X: and what if you don’t know how to use an ignorance prior
[09:36] X: have never heard of it etc
[09:36] Eliezer: if I’m dealing with someone who doesn’t know about ignorance priors, and who is dealing with the problem by making up this huge elaborate hypothesis with lots of moving parts and many places to go wrong, then the truth is that I automatically know s/he’s wrong
[09:36] Eliezer: it may not be possible to explain this to them, short of training them from scratch in rationality
[09:36] Eliezer: but it is true
[09:36] Eliezer: and if the person trusts me for a rationalist, it may be both honest and helpful to tell them, “No, that’s wrong”
[09:36] X: what if that person says, “i don’t know what their name is”, that ok?
[09:37] Eliezer: in real life you cannot choose “I don’t know”, it’s not an option on your list of available actions
[09:37] Eliezer: in real life it’s always, “I don’t know, so I’m going to say Vkktor Blackdawn because I think it sounds cool”
[09:39] Eliezer: Vkktor Blackdawn is as (im)probable as anything else, but if you start assigning more probability to it than the ignorance prior calls for—because it sounds cool, because you don’t have room in your mind for more than one possibility, or because you’ve started to construct an elaborate mental explanation of how the alien might end up named Vkktor Blackdawn
[09:39] Eliezer: then I know better
[09:40] Eliezer: and if you trust me, I may be able to honestly and usefully tell you so
[09:40] Eliezer: rather than saying “I don’t know”, which is always something to say, not to think
[09:40] Eliezer: this is important if someone asks you, “At what odds would you bet that the alien is named Vkktor Blackdawn?”
[09:41] Eliezer: or if you have to do anything else, based on your guesses and the weight you assign to them
[09:41] Eliezer: which is what probability is all about
[09:41] X: and if they say “I don’t know, I don’t know anything about probability”?
[09:41] Eliezer: then either they trust me blindly or I can’t help them
[09:41] Eliezer: that’s how it goes
[09:41] Eliezer: you can’t always save people from themselves
[09:42] X: trust you blindly about what you are saying or about your guess as to what the alien’s name is?
[09:42] Eliezer: trust me blindly when I tell them, “Don’t bet at those odds.”
I guess the main issue raised here is not what to believe, but what to say to people who may misinterpret you. In principle any answer might be justified, though I worry that people will try to excuse their irrational beliefs by saying that they are just talking that way to deal with irrational others.
Rather than “I don’t know”, I like to use either “no data” or “insufficient data”. I am enough of a geek that it is considered—for me—a “reasonable utterance”, and it’s easier to qualify a quantitative answer if I’m later pressed. And BTW, not having seen that tree, zero is a much better lower bound.
I use “Insufficient data for meaningful answer”, which is apposite and also a nice sci-fi shout-out.
Is it reasonable to distinguish between probabilities we are sure of, and probabilities we are unsure of? We know the probability that rolling a die will get a 6 is 1⁄6, and feel confident about that. But what is the probability of ancient life on Mars? Maybe 1⁄6 is a reasonable guess for that. But our probabilistic estimates feel very different in the two cases. We are much more tempted to say “I don’t know” in the second case. Is this a legitimate distinction in Bayesian terms, or just an illusion?
The root of the different feel for your estimates is that you’re comparing two different kinds of probability. You can only ever test The Ancient Life on Mars proposition once, whereas you can test the die a sufficient number of times to determine that the probability of rolling a 6 on any given throw is 1⁄6.
Let’s say that after you make your predictions your interrogator puts a gun to your head and asks, “We can either roll this die or determine whether there was Ancient Life on Mars, both will take the same amount of time. If you choose to roll the die and it’s a 6 we shoot you. If you choose to bet on Ancient Life on Mars and there was ancient life, we shoot you. Which do you choose?” In this scenario if you feel inclined to choose one option over the other (for want of not getting shot) then you don’t in truth agree with your 1⁄6 probability estimate for at least one of the options. You should feel the same about two different 1⁄6 probabilistic estimates when they’ve each been reduced to a one-shot, unreproducible test.
Hal, you have to bet at scalar odds. You’ve got to use a scalar quantity to weight the force of your subjective anticipations, and their associated utilities. Giving just the probability, just the betting odds, just the degree of subjective anticipation, does throw away information. More than one set of possible worlds, more than one set of admissible hypotheses, more than one sequence of observable evidence, can yield the final summarized judgment that a certain probability is 1⁄6.
The amount of previously observed evidence can determine how easy it is for additional evidence to shift our beliefs, which in turn determines the expected utility of looking for more information. I think this is what you’re looking for.
But when you have to actually bet, you still bet at 1:5 odds. If that sounds strange to you, look up “ambiguity aversion”—considered a bias—as demonstrated at e.g. http://en.wikipedia.org/wiki/Ellsberg_paradox
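One way to see how two judgments can both be "1/6" yet feel different is that they respond differently to new evidence, as the comment above about previously observed evidence suggests. A sketch using Beta-style pseudocounts with the same mean but different weights (the particular counts are illustrative assumptions):

```python
# Two priors with the same mean 1/6 but different evidential weight,
# represented as (successes, failures) pseudocounts.
confident = (1000, 5000)   # like a well-tested die: mean 1/6, heavy weight
uncertain = (1, 5)         # like ancient Mars life: mean 1/6, light weight

def posterior_mean(prior, successes, failures):
    a, b = prior
    return (a + successes) / (a + b + successes + failures)

# Both start at the same betting odds...
print(posterior_mean(confident, 0, 0))   # ~ 0.167
print(posterior_mean(uncertain, 0, 0))   # ~ 0.167
# ...but ten observed "successes" barely move the confident estimate
# while drastically moving the uncertain one.
print(posterior_mean(confident, 10, 0))  # ~ 0.168
print(posterior_mean(uncertain, 10, 0))  # ~ 0.688
```

The scalar betting odds are identical today; what differs is how cheaply future evidence can shift them.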
PS: Personally I’d bet a lot lower than 1⁄6 on ancient Mars life. And Tom, you’re right that 0 is a safer estimate than 10, but so is 9, and I was assuming the tree was known to be an apple tree in bloom.
An assumption with no basis, I trust you realized on reflection.
Why exactly is using/uttering an ignorance prior better than “I don’t know?” The two convey exactly the same amount of information (“speaker has no data”). It seems to me that the only difference is that the former conveys additional worthless information in the form of an estimate of a probability that bears no necessary relationship to reality.
Paul, the claim isn’t that “I don’t know” is never right. The claim is that you should only say it when it is true.
The claim that “when you have to actually bet, you still bet at 1:5 odds” overlooks some information that is commonly communicated via markets. When I trade on a market, I often do it by submitting a bid (offer to buy) and/or an ask (offer to sell). The difference between the prices at which I’m willing to place those two kinds of orders communicates something beyond what I think the right odds are. If I’m willing to buy “Hillary Clinton Elected President in 2008” at 23 and sell at 29, and only willing to buy “Person Recovers from Cryonic Suspension by 2040” at 8 and sell at 44, that tells you I’m more uncertain about wise odds for cryonics than for the 2008 election. For more sophisticated markets, option prices could communicate even more info of this sort.
There’s no rational reason to do this. If you think that X has more than a 25% chance of being true given that the market is at 25%, you’d buy at 25%. If you think it has less than a 25% chance of being true, you’d sell at 25%.
There’s no way you’re going to think that it has exactly an 8% chance of being true given that the market is at 8% and exactly a 44% chance of being true given that the market is at 44%. If you’re really more sure of the market than yourself, it will be close, but you can always improve it slightly.
Risk aversion and transaction costs are both real and reasonable things. If I think there’s a 25% chance of X, and someone else thinks there’s a 24% chance of X, it’s not worthwhile for us to bet on whether or not X will be true, because there’s so little money on the table and so much variability in whether or not X will happen.
Really? What about the Kelly Criterion?
The Kelly Criterion is when you’re betting with something that you value logarithmically. That is, doubling it gives you a constant utility. As such, it’s not an even bet. For example, if you have $1500, and you’ve already bet $500 and you’re considering betting another $1, you’re comparing gaining $1 when you have $2000 with losing $1 when you have $1000. Since the dollar is twice as valuable in the second case, you’re actually betting at 1:2 odds.
Also, the Kelly Criterion limits the amount you’re betting based on your certainty. You still bet something.
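For reference, the standard Kelly fraction for a simple binary bet can be computed directly (the example probabilities are illustrative, not from the thread):

```python
# Kelly criterion for a binary bet: f* = (b*p - q) / b,
# where b = net odds received on a win, p = win probability, q = 1 - p.
def kelly_fraction(p, b):
    q = 1 - p
    f = (b * p - q) / b
    return max(f, 0.0)   # never bet when the edge is negative

# Even-odds bet (b = 1) with a 60% win probability: stake ~20% of bankroll.
print(kelly_fraction(0.6, 1.0))   # ~ 0.2
# With no edge (p = 0.5 at even odds), Kelly says bet nothing.
print(kelly_fraction(0.5, 1.0))   # 0.0
```

This illustrates the comment's point: certainty scales the stake, but any positive edge implies a nonzero bet.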
I think another useful function of “I don’t know” is to indicate to others or remind oneself that one doesn’t know enough to proposing a solution yet. (But that article was written after this one.)
Refraining from questioning the meaning of “to proposing”, why is there a degree symbol in your link? Was that added by the site?
The little circle is added by the site to indicate internal links. Apparently the purpose is to indicate that you can hover over the link to get a preview of the target page.
Yep, that’s correct. We experimented with some other indicators, but this was the one that seemed least intrusive.
Assuming that I did not miss anything, what is wrong with phrasing your best guess as your best guess when you give it to the inquiring person? e.g.:
“How many apples are in a tree outside?”
“I’d guess/think/suppose/estimate 10-1000.”
At which point they can either proceed, knowing that you gave no guarantees for the information, or inquire further to determine your certainty and then proceed.
A lot of people would take a definite answer as an indication of direct knowledge. Is it rational to assume that no one has direct knowledge when given a definite answer?
There is more discussion of this post here as part of the Rerunning the Sequences series.
It seems to me that “I don’t know” in many contexts really means “I don’t know any better than you do”, or “Your guess is as good as mine”, or “I have no information such that sharing it would improve the accuracy of your estimate”, or “We’ve neither of us seen the damn tree, what are you asking me for?”.
This feels like a nothing response, because it kind of is, but you aren’t really saying “My knowledge of this is zero”, you’re saying “My knowledge of this which is worth communicating is zero”.
I propose that “I don’t know” between fully co-operative rationalists is shorthand for “my knowledge is so weak that I expect you would find negative value in listening to it.” Note that this means whether I say “I don’t know” depends in part on my model of you.
For example, if someone who rarely dabbles in medicine asks me if I think a cure works, and I’ve only skimmed the paper that proposes it, I might well explain how low the prior is and how shaky this sort of research tends to be. If an expert asked me the same question, I’d say “I don’t know” because they already know all that and are asking if I have any unique insight, which I don’t.
Similarly, if someone asks how much a box weighs, and I’m 95% confident it’s between 10 and 50 pounds, I’ll say “I don’t know”, because that range is too wide to be useful for most purposes. But if they follow up with “I’m thinking of shipping it fedex which has a 70 pound maximum”, then I can answer “I’m 95% confident it’s less than 70 pounds.” Though if they also say that if the shipment doesn’t go smoothly the mafia will kill them, my answer is “the scale’s in the bathroom”, because now 95% confidence isn’t good enough.
This does mean that “I don’t know” is a valid answer if my knowledge is so uncompressible that it cannot be transmitted within your patience. I don’t have a good example for this, but I don’t see it as a problem.