(Not replying “at the original post” because others haven’t and now this discussion is here.)
That fragment of “Final Words” is in a paragraph of consequences of underconfidence.
Suppose (to take a standard sort of toy problem) you have a coin which you know either comes up heads 60% of the time or comes up heads 40% of the time. (Note: in the real world there are probably no such coins, at least not if they’re tossed in a manner not designed to enable bias. But never mind.) And suppose you have some quantity of evidence about which sort of coin it is—perhaps derived from seeing the results of many tosses. If you’d been tallying them up carefully there wouldn’t be much room for doubt about the strength of your evidence, so let’s say instead that you’ve just been watching and formed a general idea.
Underconfidence would mean e.g. that you’ve seen an excess of T over H over a long period, but your sense of how much information that gives you is wrong, so you think (let’s say) there’s a 55% chance that it’s a T>H coin rather than an H>T coin. So then someone trustworthy comes along and tells you he tossed the coin once and it came up H. That has probability 60% on the H>T hypothesis and probability 40% on the T>H hypothesis, so it’s 3:2 evidence for H>T, so if you immediately have to bet a large sum on either H or T you should bet it on H.
But maybe the _real_ state of your evidence before this person’s new information justifies 90% confidence that it’s a T>H coin, in which case that new information leaves you still thinking it’s more likely T>H, and if you immediately have to bet a large sum you should bet it on T.
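The two updates above can be checked with a quick Bayes computation (a minimal sketch; the function name is just for illustration, but the numbers are the ones from the example):

```python
def posterior_p_tgh(prior_p_tgh):
    """Posterior probability that the coin is T>H after seeing one H."""
    p_h_given_tgh = 0.4   # a T>H coin comes up heads 40% of the time
    p_h_given_hgt = 0.6   # an H>T coin comes up heads 60% of the time
    numerator = prior_p_tgh * p_h_given_tgh
    denominator = numerator + (1 - prior_p_tgh) * p_h_given_hgt
    return numerator / denominator

# Underconfident prior of 55% T>H: posterior drops below 50%, so bet on H.
print(round(posterior_p_tgh(0.55), 3))  # 0.449
# Calibrated prior of 90% T>H: posterior stays well above 50%, so bet on T.
print(round(posterior_p_tgh(0.90), 3))  # 0.857
```

So the single H flips the underconfident bettor’s decision but not the calibrated one’s, which is exactly the failure described.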
Thus: if you are underconfident you may take advice you shouldn’t, because you underweight what you already know relative to what others can tell you.
Note that this is all true even if the other person is scrupulously honest, has your best interests at heart, and agrees with you about what those interests are.