Reasonable Explanations
Today I watched a friend do calibration practice and was reminded of how wide you have to cast your net to get well-calibrated 90% confidence intervals. This is true even when the questions aren't gotchas, simply because you won't think of all the ways the answer could be wildly unlike your quick estimate's model. Being well-calibrated at 90% (even though that doesn't sound all that confident!) requires leaving lots of room even on questions you really do know pretty well, because you will feel like you know them pretty well even when you're missing something that wrecks you by an order of magnitude.
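One way to see the effect is a toy simulation (all the numbers below are assumed for illustration): an estimator whose model of the question is right most of the time, but which occasionally misses a factor that throws the answer off by an order of magnitude, ends up with far less coverage than the interval's nominal 90%.

```python
import random

random.seed(0)

trials = 10_000
hits = 0
for _ in range(trials):
    guess = 100.0
    # An interval that "feels" like 90% confidence around the guess:
    lo, hi = guess * 0.5, guess * 2.0
    if random.random() < 0.8:
        # The quick estimate's model is basically right.
        truth = random.uniform(lo, hi)
    else:
        # Unmodeled surprise: the truth is an order of magnitude off.
        truth = guess * random.choice([0.05, 20.0])
    hits += lo <= truth <= hi

print(f"intended 90% interval actually covers about {hits / trials:.0%}")
```

The surprise rate here is made up, but the shape of the result is the point: every unconsidered way of being wildly wrong comes straight out of your coverage.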
Being miscalibrated can feel like “if it were outside of this range, I have just… no explanation for that”—and then it turns out there’s a completely reasonable explanation.
Anyway, I thought a fun exercise would be describing weird situations we’ve encountered that turned out to have reasonable explanations. In initial descriptions, present only the information you (and whoever was thinking it over with you) remembered to consider at the time, then follow up in ROT-13 with what made the actual sequence of events become clear.
meta: LW supports spoiler tags now.
I love this definition of miscalibration for high confidence!
Don’t have a personal example handy, but here is a classic one from Feynman:
His explanation:
From https://www.brainpickings.org/2017/10/17/richard-feynman-arline-letter/
And here I thought that the people recording the time of death had assumed it was a functional clock when it was not.
I was making tea. I poured hot water into a travel mug. The interior sides of the travel mug were silver. The liquid looked yellow. (Before I put the tea bag in.) To see if the yellow contamination had come from the kettle that I had heated over the stove, I poured some of the remaining water into the sink. That water was clear, with no evidence of a yellowish tinge. The mug had been taken from a cupboard of clean dishes. I was fairly certain I had looked in the mug before using it and seen that it was clean. After seeing the yellowish liquid, I still saw no other indication that the mug might have been dirty (no gunk on the inside or outside). Just the mysteriously yellow liquid.
Guvf jnf n onq pnfr bs zbzzl oenva. V cbherq ubarl vagb gur obggbz bs gur zht svefg, gura sbetbg V unq qbar gung ol gur gvzr V cbherq gur jngre va. Gur ubg jngre vzzrqvngryl qvffbyirq gur ubarl, naq vg tnir gur jngre gur lryybjvfu gvatr. V fcrag svir zvahgrf gelvat gb engvbanyyl qvntabfr gur ceboyrz orsber erzrzorevat nobhg gur ubarl.
I had a similar one, where I completely overwrote my actual memory of what happened with what habit said should have happened: I went to get my bike from the garage and it was not there, even though I clearly remembered having stored it in the garage the day prior.
Spoiler: I hadn’t. I’d gone to the store on the way back, left the bike locked in front of the store, then (since I almost always go to the store on foot) walked home. My brain internally rewrote this as “rode home, [stored bike, went to the store], went home.” (The [] part did not happen.)
Memory is weird, especially if your experience is normally highly compressible.
I’ll start.
A few years ago, I received a hand-addressed package with my correct name and address on it; the return address was a completely unfamiliar name in a state I’ve never visited and have no friends in. The contents were three Asterix books in the original French which I had no use for, did not know of anyone who wanted, and could not in fact read.
N sevraq unq hfrq obbx-fjnccvat jrofvgr Obbxzbbpu gb trg zr fbzr cerfragf n juvyr cerivbhfyl naq unqa’g erzrzorerq gb hcqngr gur nqqerff jura trggvat gurfr sbe ure uhfonaq.
There’s a question of specificity too—you could make a high-confidence prediction that guvf cnpxntr jnf frag gb lbh qhr gb na reebe, abg vagragvbanyyl but there was a very wide possibility space for jub znqr jung xvaq bs reebe.
The idea of confidence levels only works at fairly high abstractions of predictions. Nobody is 90% confident of extremely precise predictions, only of predictions that will cover a large share (90%, in fact) of the near-infinite variety of future states of the universe.
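The gap between the coarse claim and the fully specific story can be made concrete with toy numbers (all assumed here, not taken from the anecdote): even when each added specific is individually plausible, their conjunction is far less probable than the coarse prediction.

```python
# Toy probabilities (assumed) for one event described at two levels:
p_sent_in_error = 0.90      # coarse: "this package reached me by mistake"
p_right_culprit = 0.05      # specific: who made the mistake
p_right_mechanism = 0.30    # specific: exactly how the mistake happened

p_specific = p_sent_in_error * p_right_culprit * p_right_mechanism
print(f"coarse claim: {p_sent_in_error:.2f}, "
      f"fully specific story: {p_specific:.4f}")
```

This is just the conjunction rule, but it is why you can be highly confident something has *a* reasonable explanation while having very little confidence in any particular one.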