On the subject of advice to novices, I wanted to share a bit I got out of Understanding Uncertainty. This is going to seem painfully simple to a seasoned Bayesian, but it’s not meant for you. Rather, it’s intended for someone who has never made a probability estimate before. Even after a person has learned about the Bayesian view of probability and understands what a probability estimate is, actually translating beliefs into numerical estimates can still seem weird and difficult.
The book’s advice is to use the standard balls-in-an-urn model to get an intuitive sense of the probability of an event. Imagine an urn that contains fifty red balls and fifty white balls. If you imagine drawing a ball at random from that urn, you get an intuitive sense for an event that has fifty percent probability. Now either increase or decrease the number of red balls in the urn (while correspondingly altering the number of white balls so that the total still sums to one hundred) until the intuitive probability of drawing a red ball seems to match your intuitive probability of the event occurring. The number of red balls in the urn is then your (unexamined, uncorrected) probability estimate for the event, expressed as a percentage.
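If it helps to see the mapping spelled out, here is a minimal Python sketch (my own illustration, not from the book) that simulates the urn: the count of red balls out of one hundred just is the probability estimate, and a few simulated draws confirm the correspondence.

```python
import random

def urn_probability(red_balls, total=100, trials=10_000):
    """Estimate the chance of drawing a red ball from an urn containing
    `red_balls` red and `total - red_balls` white balls, by simulation.
    With red_balls = 50 this converges to about 0.5."""
    red_draws = sum(random.randrange(total) < red_balls for _ in range(trials))
    return red_draws / trials

# Adjust the number of red balls until an imagined draw "feels" as likely
# as the event you care about; that count (out of 100) is your raw estimate.
for red in (50, 70, 95):
    print(f"{red} red balls out of 100 -> simulated P(red) ~ {urn_probability(red):.2f}")
```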
Once you teach a person how to put numbers on their beliefs, you’ve helped them take a first step toward overcoming bias, because numbers are easy to write down and check, and easy to communicate to other people. They can also begin to quantify their biases. Anyone can learn to repeat the phrase, “The availability heuristic causes us to estimate what is more likely by what is more available in memory, which is biased toward vivid, unusual, or emotionally charged examples” (guessing the teacher’s password), but it takes a rationalist to ask: how much, on average, does the availability heuristic reduce the accuracy of my beliefs? Where does it rank on the list of biases, in terms of the inaccuracy it causes?
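As one illustration of “easy to write down and check” (again my addition, not something the post or the book prescribes): once estimates are recorded as numbers, a proper scoring rule such as the Brier score turns a list of (stated probability, actual outcome) pairs into a single accuracy figure, and comparing scores on bias-prone questions against the rest is one crude way to start putting a number on what a given bias costs you.

```python
def brier_score(records):
    """Mean squared error between stated probabilities and outcomes,
    where outcome is 1 if the event happened and 0 if it did not.
    Lower is better; always answering 0.5 scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in records) / len(records)

# Hypothetical record of (stated probability, what actually happened).
my_estimates = [(0.9, 1), (0.7, 0), (0.95, 1), (0.3, 0), (0.6, 1)]
print(f"Brier score: {brier_score(my_estimates):.3f}")
```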