I originally heard this point made by Ben Pace in Episode 126 of the Bayesian Conspiracy Podcast. Ben said he learned it from the book How to Measure Anything, but when I tracked down what I think is the relevant section, the point wasn’t made there explicitly.
Suppose that I came up to you and asked you for a 90% confidence interval for the weight of a wazlot. I’m guessing you would not really know where to start. However, suppose that I randomly sampled a wazlot and told you it weighed 142 grams. I’m guessing you would now have a much better idea of where your 90% confidence interval should sit (although you still wouldn’t have a very good guess at its width).
In general, if you are very ignorant about something, the first instance of that thing tells you what domain you’re operating in. If you have no idea how much something weighs, learning the weight of one tells you what the reasonable orders of magnitude are. Things that sometimes weigh 142 grams don’t typically also sometimes weigh 12 solar masses. Similarly, things that take 5 minutes don’t typically also take 5 days, and things that are 5 cm long aren’t typically also 5 km long.
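To make the wazlot example concrete, here is a minimal sketch (my own toy model, not anything from the podcast or the book): start with a log-uniform prior over weights spanning roughly forty orders of magnitude, then update on a single observation of 142 grams, assuming a typical wazlot is within about a factor of ten of the one sampled. One sample collapses the 90% interval from dozens of orders of magnitude down to a few.

```python
import numpy as np

# Log-uniform prior over log10(weight in grams), spanning roughly
# a dust grain (1e-6 g) to a solar mass (~2e33 g).
log10_grid = np.linspace(-6, 36, 10_000)
prior = np.ones_like(log10_grid)
prior /= prior.sum()

# Likelihood of observing a 142 g wazlot, assuming (my assumption)
# individual wazlots fall within about a factor of 10 of typical,
# i.e. a standard deviation of 1 in log10 space.
obs = np.log10(142)
likelihood = np.exp(-0.5 * (log10_grid - obs) ** 2)

posterior = prior * likelihood
posterior /= posterior.sum()

# 90% credible interval before and after the single sample.
def interval(p):
    cdf = np.cumsum(p)
    return log10_grid[np.searchsorted(cdf, 0.05)], log10_grid[np.searchsorted(cdf, 0.95)]

for name, p in [("prior", prior), ("posterior", posterior)]:
    lo, hi = interval(p)
    print(f"{name}: 10^{lo:.1f} g to 10^{hi:.1f} g")
```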
For more abstract concepts, having a single sample allows you to locate the concept in concept space by anchoring it to thing space. “Redness” cannot be properly understood until it is known that “apples are red”. “Functions” are incomprehensible until you know “adding one to a number” is a function. “Resources” are vague until you learn that “money is a resource”.
In reality, the first sample often gives you more information than a random sample. If I ask a friend for an example of a snack, they’re not going to randomly sample a snack and tell me about it; they’re probably going to pick a snack that is at the center of the space of all snacks, like potato chips.
From an information-theoretic perspective, the expected amount of information gained from the first sample must be the highest. If the sampling process is independent and identically distributed, the 2nd sample is expected to be more predictable given knowledge of the first. There is some chance that the first sample is misleading, but the probability of drawing a sample that is misleading by a given amount goes down as that amount increases, so in expectation the first sample isn’t very misleading. If you’re very ignorant, your best guess for the mean of a distribution is pretty close to the mean of the samples you have, even if you only have one.
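As a sanity check on the diminishing-information claim, here is a minimal sketch (again my own toy model, with made-up numbers): an unknown mean with a broad Normal prior and i.i.d. Normal observations. The posterior variance after each sample shows the first sample removing far more uncertainty than any later one.

```python
# Conjugate normal model: mean ~ Normal(0, tau^2), samples ~ Normal(mean, sigma^2).
tau2 = 100.0    # broad prior variance: we start out very ignorant
sigma2 = 1.0    # observation noise

def posterior_variance(n):
    """Variance of the belief about the mean after n i.i.d. samples."""
    return 1.0 / (1.0 / tau2 + n / sigma2)

prev = tau2
for n in range(1, 6):
    post = posterior_variance(n)
    print(f"after sample {n}: variance {post:.3f} (reduced by {prev - post:.3f})")
    prev = post
# The reduction from the first sample dwarfs all later reductions.
```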
This is one perspective on why asking for examples is so powerful; they typically give you the first sample, which contains the most information.
Mark mentions that he got this point from Ben Pace. A few months ago I heard the extended version from Ben, and what I really want is for Ben to write a post (or maybe a whole sequence) on it. But in the meantime, it’s an important idea, and this short post is the best source to link to on it.
I’m curious what you feel is missing from this post, such that it doesn’t just convey everything important about the concept? (I’ve heard Ben talk about it a bit and didn’t feel a strong sense that anything was missing here.)
A lot of useful techniques can be viewed as ways to “get the first sample” in some sense. Fermi estimates are one example. Attempting to code something in Python is another.
(I’m not going to explain that properly here. Consider it a hook for a future post.)
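As a hedged illustration of the Fermi-estimate case (my example, not Mark’s, and not the fuller explanation promised above): a back-of-envelope calculation is one way of generating a first rough sample of a quantity you otherwise have no feel for, and that single crude sample is usually enough to pin down the order of magnitude.

```python
# Classic Fermi estimate: roughly how many piano tuners work in Chicago?
# Every number here is a made-up round guess; the point is that one crude
# "sample" of the answer already tells you the order of magnitude.
population = 3_000_000                    # people in Chicago, roughly
people_per_household = 2
households_with_piano = 1 / 20
tunings_per_piano_per_year = 1
tunings_per_tuner_per_year = 2 * 5 * 50   # 2 a day, 5 days a week, 50 weeks

pianos = population / people_per_household * households_with_piano
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(f"~{tuners:.0f} piano tuners")      # ~150: the right ballpark
```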