Noticing surprise to help you notice confusion.

Epistemic Status: I was about to write a post on this, and then realized I hadn’t actually tried to use this technique much since coming up with it a year ago. I think this is mostly because I didn’t try, rather than because the technique was demonstrably not good (although obviously it wasn’t so useful that practicing the skill was self-reinforcing). For now I’m writing a shortform post, and giving it a more dedicated effort for the next month.
Eliezer talks about “Noticing Confusion” a lot, and it took a long time for me to really grok. Confusion is specifically hard to notice, and that’s kinda what makes it confusion. My take, from Sunset at Noon:
The phenomenon here is when bits of evidence don’t add up, and you get a subtle sense of wrongness. But then instead of heeding that wrongness and making sense of it, you round the evidence to zero, or you round the situation to the nearest plausible cliché.
This manifests (in me, anyway) as a slight feeling of “hrmm…? huh”, which usually passes before I do anything with it. Later on, I might retroactively realize I was confused, once it becomes extremely apparent why the evidence wasn’t adding up. For example: I notice my friends are acting a bit weird, but I shrug it off. Later, it turns out they were planning a surprise birthday party for me. Afterwards I’ll remember the sense of weirdness and think “man, I should have been able to figure out something was up.”
Noticing confusion in the moment is hard.
But, surprise is more noticeable.
Like, the magnitude of how strong a signal confusion is, on a 1–10 scale, is… less than 1. But the magnitude of the signal of surprise is like a 2 or 3. (Compared to stubbing my toe, which is like an 8.) I still need to practice noticing surprise, but it’s at least achievable.
Surprise is different from confusion – surprise seems like “something happened that I wasn’t predicting.” And confusion is more like “something is happening that’s outside my model, but my brain is wired to fit things into my model if it can, so I don’t necessarily notice.”
Sometimes failed predictions fall within your model. You might be surprised if a given person is struck by lightning, but not necessarily confused, because “got struck by lightning” is a clear explanation.
Whereas if a thousand people in a town were all struck by lightning in one day, you might be confused, because that’s an extreme enough outlier that it might suggest there is some phenomenon you don’t understand at work.
Failed predictions overlap enough with “things outside my model” that it seems like, if you can get good at noticing surprise, you can then check for “Is this confusing? Does it fit neatly into my existing models?” and then go on to generate hypotheses and prune them.
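Here’s a toy sketch of how that two-step check could work, if you squint. All the numbers and thresholds here are invented for illustration – this isn’t a claim about how to actually quantify surprise:

```python
# Toy model: "surprise" fires when an observation had low probability under my
# predictions; "confusion" fires when, additionally, no hypothesis already in
# my model explains the observation well. Thresholds are made up.

SURPRISE_THRESHOLD = 0.05    # assumed: flag anything I'd have given <5% odds
EXPLAINED_THRESHOLD = 0.20   # assumed: "fits neatly" if some in-model hypothesis makes it likely

def react(p_observation: float, p_best_in_model_explanation: float) -> str:
    if p_observation >= SURPRISE_THRESHOLD:
        return "expected: carry on"
    # Surprise fired (the noticeable 2-or-3-out-of-10 signal). Now run the
    # cheap follow-up check: does anything already in my model explain this?
    if p_best_in_model_explanation >= EXPLAINED_THRESHOLD:
        return "surprised but not confused: update within the model"
    return "confused: generate hypotheses outside the model, then prune them"

# One person struck by lightning: unpredicted, but the model explains it fine.
print(react(p_observation=1e-4, p_best_in_model_explanation=0.9))
# A thousand people struck in one town in one day: nothing in-model explains it.
print(react(p_observation=1e-12, p_best_in_model_explanation=1e-6))
```

The point of the structure is that the second check is cheap once the first one fires – surprise is the trigger you can actually feel, and confusion is what you test for afterwards.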
(Sunset at Noon is a lengthy essay that, among other things, goes into more detail about how this all fits into my picture of rationality.)
I think 1000 people being struck by lightning would register as a gigantic surprise, not a less-than-1-signal-confusion.
I don’t know where the threshold is, but I’d think there is some number of simultaneous lightning strikes where the likelihood of them all happening at once by chance is outweighed by the likelihood of some kind of phenomenon that wasn’t in my model. (It looks like about 900,000 lightning strikes happen yearly in Louisiana, so if a million happened in one day in one town, that seems clearly outside of model. Dunno whether 1,000 in one town in one day is something that’s ever been recorded.)
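For what it’s worth, a back-of-the-envelope version of that threshold calculation, assuming (hypothetically) that a typical town sees about 5 strikes on a stormy day under the normal-weather model:

```python
import math

def poisson_log10_pmf(k: int, lam: float) -> float:
    """log10 of P(X = k) for X ~ Poisson(lam)."""
    log_p = -lam + k * math.log(lam) - math.lgamma(k + 1)
    return log_p / math.log(10)

LAM = 5.0   # assumed strikes per town per day under my current model
K = 1000    # the observation in question

log10_within_model = poisson_log10_pmf(K, LAM)
log10_prior_outside = -9.0  # assumed prior: one-in-a-billion odds of an unmodeled phenomenon

print(f"log10 P(1000 strikes | normal weather) ~ {log10_within_model:.0f}")  # ~ -1871
# The within-model explanation loses by roughly 1,860 orders of magnitude,
# even against a one-in-a-billion prior on "something outside my model is at
# work" – so the crossover threshold sits well below 1,000 simultaneous strikes.
if log10_prior_outside > log10_within_model:
    print("the outside-model hypothesis dominates")
```

(This uses the probability of exactly 1,000 strikes rather than the full tail, which is a fine approximation that far out; the qualitative conclusion doesn’t depend on the assumed rate.)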
Create a machine that creates lightning strikes.