As the arm began to unfold, the teacher smiled and said only, “I assign 1% probability to the proposition ‘a white ball will be drawn,’ and 99% probability to ‘a red ball will be drawn.’”
The woman with the urn cocked her head and said, “Huh, you three are dressed like rationalists, and yet you seem awfully certain that I told the truth about the arm drawing balls from the urn…”
The arm whirred into motion.
I am waiting for the woman to ask me to place a bet or pay a price, because surely she wouldn’t play ball-and-urn games with passing travelers with nothing in it for her but functioning as a side character in a parable about probability.
Most people, upon encountering the parable above, think that it is obvious. Almost everybody who hears me tell it in person just nods, but most of them fail to deeply integrate its lesson.
Feh. Sorry, but I think most of humanity does think of belief in quantitative terms. Folk epistemology talks of believing strongly or weakly, not of picking maximum-a-posteriori estimates.
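To make that contrast concrete, here is a minimal Python sketch (the urn’s contents and the draws are invented for illustration): a Bayesian observer carries a whole graded posterior over hypotheses, while a maximum-a-posteriori pick collapses all of that into a single “belief.”

```python
import numpy as np

# Candidate hypotheses about the urn: the fraction of white balls,
# on a coarse grid. (Illustrative only; the parable never tells us.)
thetas = np.linspace(0.0, 1.0, 11)
prior = np.full_like(thetas, 1.0 / len(thetas))  # uniform prior

# Suppose we watch three draws: white, red, red. Bayes-update on each.
draws = [1, 0, 0]  # 1 = white, 0 = red
posterior = prior.copy()
for d in draws:
    posterior *= thetas if d == 1 else (1.0 - thetas)
posterior /= posterior.sum()

# Folk epistemology's "believing strongly or weakly" is the whole
# distribution:
for theta, p in zip(thetas, posterior):
    print(f"P(white fraction = {theta:.1f}) = {p:.3f}")

# A maximum-a-posteriori estimate keeps only one point of it:
print("MAP estimate:", thetas[np.argmax(posterior)])
```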
Our language paints beliefs as qualitative: we speak of them as if they were binary things. You either know something or you don’t. You either believe me or you don’t. You’re either right or you’re wrong.
Language must be succinct. Sometimes, this can make it very confusing.
Traditional science, as it’s taught in schools, propagates this fallacy. The statistician’s role (they say) is to identify two hypotheses, null and alternative, then test them; it is then their duty (they say) to believe whichever hypothesis the data supports.
This depends where you go to school. Years before I took Technion’s Intro to Statistics course, I took Reasoning About Uncertainty at UMass Amherst, and David Barrington taught only the Bayesian perspective—to the point that seeing the Intro to Stats teacher declare, “A likelihood is not a probability!” utterly boggled me. Because after all, in all my previous schooling, a likelihood had just been a funny name for a conditional probability distribution.
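For what it’s worth, the teacher’s slogan can be checked numerically. A small Python sketch, using an invented binomial example: the same formula is a probability distribution when you vary the data with the parameter held fixed, but not when you vary the parameter with the data held fixed.

```python
import numpy as np
from scipy.stats import binom

# Invented example: k = 3 white balls in n = 10 draws.
thetas = np.linspace(0.0, 1.0, 10_001)
likelihood = binom.pmf(3, 10, thetas)  # same formula, varying theta

# Vary the data with theta fixed: a genuine probability distribution,
# so it sums to 1 over all possible outcomes k = 0..10.
print(binom.pmf(np.arange(11), 10, 0.3).sum())  # 1.0

# Vary the parameter with the data fixed: not a distribution at all.
# Its integral over theta in [0, 1] is 1/(n+1) = 1/11, not 1.
print(likelihood.mean())  # ~0.0909 (mean on a uniform grid ≈ integral)
```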
Also, I think that, as “Bayesian” as we like to be on this site, putting down frequentist statistics is simply a bad idea. When you possess both the data and the computing power to train a Fully Ideal Bayesian generative model, that model minimizes prediction error (the Fable of the Bayes-Optimal Classifier). When you actually need to minimize prediction error in real life, with slow computers and little training data, training a discriminative, noncausal model is often the Right Thing.
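One way to probe that tradeoff empirically is a toy comparison of a generative model (Gaussian naive Bayes) against a discriminative one (logistic regression) at several training-set sizes, in the spirit of Ng and Jordan’s classic comparison of the two families. The dataset below is synthetic and the sample sizes are invented for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic classification data, purely for illustration.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit both models on growing slices of the training set and compare
# held-out accuracy.
for n in (20, 100, 1_000):
    generative = GaussianNB().fit(X_tr[:n], y_tr[:n])
    discriminative = LogisticRegression(max_iter=1_000).fit(X_tr[:n], y_tr[:n])
    print(f"n={n:5d}  naive Bayes: {generative.score(X_te, y_te):.3f}  "
          f"logistic regression: {discriminative.score(X_te, y_te):.3f}")
```

Which model wins at which sample size depends on how badly the generative assumptions are violated; the point is only that the answer is an empirical matter, not a loyalty oath.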
And likewise, when you need to prove that some observed qualitative pattern did not happen by experimenter error, bias, or other self-delusion, and you indeed don’t have much computing power to build a predictive model at all, then you have found the appropriate place for frequentist statistics. They are the Guards at the Gate of mainstream science precisely because they guard against the demonic enemies that actually assault mainstream science: experimenter error, experimenter egotism, self-promotion, and the human being’s inductive bias to see causality where there is none. It is still approximate-Bayesian, bounded-rational to guard against the most common problems first, especially for people who were not explicitly trained in how to form priors with the lagom/Goldilocks amount of informedness, or better yet, explicitly trained in how to handle mixture models that allow for some amount of out-of-model error.
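As a concrete instance of that guard duty, here is a minimal sketch of the cheapest such check, a plain two-sample t-test (all numbers invented): it asks how often noise alone would produce a gap this large, with no prior to elicit and almost no computation.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Invented measurements: a treatment group and a control group.
control = rng.normal(loc=10.0, scale=2.0, size=30)
treatment = rng.normal(loc=11.0, scale=2.0, size=30)

# The guard at the gate: how often would chance alone produce a mean
# difference this large? If p is small, "I imagined the pattern" gets
# harder to sustain.
t_stat, p_value = ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```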
Also, and this relates to that other post I made the other day, I find this wannabe-value-neutral Jedi Knight crap very distasteful. We’re all human beings here: we ought to speak as having the genuine concerns of real people. One does not pursue “rationality” out of an abstract love for probability or logic: that path leads to the mad labyrinths of Platonism and Philosophy, and eventually dumps its benighted walkers into the Catholic Church. You pursue winning, and find where that takes you, and avoid being turned from your goal even in the name of Rationality (since rationality, after all, is not a terminal goal). There must be some way in which you will the world outside your mind to change, or you will not be able to chain your mind to the real world.
(Please tell me I just founded the LWian Sith. I have plans for hilarious initiation rituals.)