Metauncertainty
Response to: When (Not) To Use Probabilities
“It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought.” —E. T. Jaynes
The uncertainty due to vague (non-mathematical) language is no different from uncertainty introduced by “randomizing” something (after all, probability is in the mind). The principle still holds: you should be able to come up with a better way of doing things if you can put in the extra thought.
In some cases you can’t afford to waste time or it’s not worth the thought, but when dealing with things such as deciding whether to run the LHC or signing up for cryonics, there’s time, and it’s sort of a big deal, so it pays to do it right.
If you’re asked “how likely is X?”, you can answer “very unlikely” or “0.127%”. The latter may give the impression that the probability is known more precisely than it is, but the former is too vague; both strategies do poorly on the log score.
If you are unsure what probability to state, state this with… another probability distribution.
“My probability distribution over probabilities is an exponential with a mean of 0.127%” isn’t vague, it isn’t overconfident (at the meta^1 level), and gives you numbers to actually bet on.
The expectation value of the metaprobability distribution, E[p] = ∫₀¹ P_meta(p)·p·dp, is exactly the probability you should state when trying to maximize your expected log score.
To see this, write out the expected log score: ∫₀¹ P_meta(p)·(p·log(q) + (1−p)·log(1−q))·dp. Split this into two integrals and pull out the terms that don’t depend on p; the remaining integrals are just E[p] and 1−E[p], so the expression becomes the ordinary log score with p replaced by E[p]. We already know that the log score is maximized when q = p, so here we set q = E[p].
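The derivation can be checked numerically. A minimal sketch, reusing the post’s illustrative exponential metadistribution with mean 0.127% (the truncation to [0, 1] is an added assumption so every sample is a valid probability):

```python
import numpy as np

# Metaprobability distribution: exponential with mean 0.00127 (the post's
# illustrative 0.127% figure), truncated to [0, 1].
rng = np.random.default_rng(0)
samples = rng.exponential(scale=0.00127, size=200_000)
samples = samples[samples < 1.0]  # keep only valid probabilities

def expected_log_score(q, p_samples):
    """Expected log score of betting q, averaged over the metadistribution."""
    return np.mean(p_samples * np.log(q) + (1 - p_samples) * np.log(1 - q))

mean_p = samples.mean()
# Betting the mean should beat betting other candidate probabilities.
for q in (0.5 * mean_p, mean_p, 2.0 * mean_p, 0.01):
    print(f"q = {q:.5f}  expected log score = {expected_log_score(q, samples):.5f}")
```

Betting q = mean_p scores at least as well as every other candidate q, matching the analytic result.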
This is a very useful result when dealing with extremes where we are not well calibrated. Instead of punting and saying “err… prolly ain’t gonna happen”, put a probability distribution on your probability and take the mean. For example, if you think X is true, but you don’t know whether you’re 99% sure or 99.999% sure, you’ve got to bet at ~99.5%.
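A sketch of that last example, assuming (purely for illustration) a metadistribution that is uniform between 99% and 99.999%:

```python
# If you don't know whether you're 99% or 99.999% sure, and your metaprobability
# distribution is (say) uniform over that interval, the probability to bet is its mean.
low, high = 0.99, 0.99999
bet = (low + high) / 2  # mean of a uniform distribution on [low, high]
print(f"bet at {bet:.4%}")  # prints "bet at 99.4995%", i.e. roughly 99.5%
```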
It is still no guarantee that you’ll be right 99.5% of the time (by assumption, we’re not calibrated!), but you can’t do any better given your metaprobability distribution.
You’re not saying “when I’m this confident, I’m right 99.5% of the time”. You’re just saying “I expect my log score to be maximized if I bet at 99.5%”. The former implies the latter, but the latter does not (necessarily) imply the former.
This method is much more informative than “almost sure”, and gives you numbers to act on when it comes time to “shut up and multiply”. Your first set of numbers may not have “come from numbers”, but the ones you quote now do, which is an improvement. Theoretically this could be taken up a few steps of meta, but once is probably enough.
Note: Anna Salamon’s comment makes this same point.
This is a non-trivial way to coax more information from your brain: try to quantify not only your gut feelings about real-world events, but also your gut feelings about those gut feelings. Thanks, upvote. But I wonder whether this technique actually makes you better calibrated in the end; there seems to be no way to find out except by experiment.
I have a lot of experience with gambling and I do this regularly. I can verify that, in my experience, it makes you better calibrated. What I’ve had success with is generating a probability range before incorporating the market’s opinion, then using it to generate another. I find the key in practice is not to define a mathematical distribution but to give a mean prediction and a plausible range to which you assign a probability of around 95%. Often the mean is not centered in the range.
This post should link to Probability is Subjectively Objective.
Actually, this is probably always the more correct thing to do. What you’re calling a “probability distribution over probabilities” or a “metaprobability distribution” is a probability distribution.