(Subjective Bayesianism vs. Frequentism) VS. Formalism

One of the core aims of the philosophy of probability is to explain the relationship between frequency and probability. The frequentist proposes identity as the relationship. This use of identity is highly dubious. We know how to check for identity between numbers, or even how to check for the weaker copula relation between particular objects; but how would we test the identity of frequency and probability? It is not immediately obvious that there is some simple value out there which is modeled by probability, the way position and mass are values modeled by Newton's Principia. You can actually check whether density * volume = mass by taking separate measurements of mass, density, and volume; but what would you measure to check a frequency against a probability?
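To make the contrast concrete, here is a minimal sketch (in Python, with invented measurement readings of my own) of what checking an identity claim by independent measurement looks like. The point is that no analogous second measurement exists for probability:

```python
# Checking "density * volume = mass" is possible because each quantity
# has its own independent measurement procedure. All numbers below are
# invented illustrative readings, not real data.

rho = 998.0      # density of a water sample read from a hydrometer, kg/m^3
volume = 0.002   # volume read from a graduated container, m^3
mass = 1.996     # mass read from a scale, kg

# The identity claim passes if the independent readings agree,
# up to measurement error.
assert abs(rho * volume - mass) < 0.01

# No analogous check exists for "frequency = probability": we can count
# outcomes to measure a frequency, but there is no second instrument
# that reads off "the probability" to compare it against.
```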
There are certain appeals to frequentist philosophy: we would like to say that if a bag has 100 balls in it, only 1 of which is white, then the probability of drawing the white ball is 1/100, and that if we take a non-white ball out, the probability of drawing the white ball is now 1/99. Frequentism would make the philosophical justification of that inference trivial. But of course, anything a frequentist can do, a Bayesian can do (better). I mean that literally: it's the stronger magic.
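As a quick illustration (a simulation sketch of my own; the function name and trial count are arbitrary), the long-run draw frequencies do track the 1/100 and 1/99 values both camps want to license:

```python
# Estimating draw frequencies for the bag-of-balls example by simulation.
import random

def draw_frequency(n_balls, n_white, trials=100_000):
    """Relative frequency of drawing a white ball in repeated single draws."""
    bag = ["white"] * n_white + ["other"] * (n_balls - n_white)
    hits = sum(random.choice(bag) == "white" for _ in range(trials))
    return hits / trials

print(draw_frequency(100, 1))  # close to 1/100 = 0.01
print(draw_frequency(99, 1))   # after removing one non-white ball: close to 1/99
```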
A subjective Bayesian, more or less, says that frequencies are related to probabilities because when you learn a frequency you thereby learn a fact about the world, and one must update one's degrees of belief on every available fact (a small sketch of such an update follows the two statements below). The subjective Bayesian actually uses the copula in another strange way:
Probability is subjective degree of belief.
and subjective Bayesians also claim:
Probabilities are not in the world, they are in your mind.
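As promised above, here is a minimal sketch of what updating a degree of belief on learned frequency data can look like, assuming a standard Beta-Binomial model (the model choice and the numbers are mine, for illustration only):

```python
# Conjugate Beta-Binomial updating: learn a frequency, update a credence.

def update_beta(alpha, beta, heads, tails):
    """Posterior Beta parameters after observing heads/tails counts."""
    return alpha + heads, beta + tails

alpha, beta = 1.0, 1.0                                    # uniform prior on the bias
alpha, beta = update_beta(alpha, beta, heads=7, tails=3)  # learned frequency: 7/10

# Posterior mean degree of belief that the next flip lands heads:
print(alpha / (alpha + beta))  # 8/12 ≈ 0.667
```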
Both of these statements are brilliantly championed in Probability is Subjectively Objective. But ultimately, the formalism which I would like to suggest denies both of them. Formalists do not ontologically commit themselves to probabilities, just as they do not say that numbers exist; hence we do not locate probabilities in the mind or anywhere else; we only commit ourselves to number theory and probability theory. Mathematical theories are simply repeatable processes which construct certain sequences of squiggles called "theorems" by changing the squiggles of other theorems, according to certain rules called "inferences". Inferences always take as input certain sequences of squiggles called premises, and output a sequence of squiggles called the conclusion. The only thing an inference ever does is add squiggles to a theorem, take away squiggles from a theorem, or both. It turns out that these squiggle sequences mixed with inferences can talk about almost anything, certainly any computable thing. The formalist does not need to ontologically commit to numbers to assert "There is a prime greater than 10000," even though "there is an x such that" is a flat assertion of existence, because for the formalist "There is a prime greater than 10000" simply means that number theory contains a theorem which is interpreted as "there is a prime greater than 10000." When you state a mathematical fact in English, you are interpreting a theorem of a formal theory. If, under your suggested interpretation, all of the theorems of the theory are true, then whatever system or mechanism your interpretation of the theory talks about is said to be modeled by the theory.
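To give the interpretation claim a concrete instance (a sketch of mine, not part of the formal story): under the usual interpretation, the theorem "there is a prime greater than 10000" can be checked by exhibiting a computational witness:

```python
# Exhibiting a witness for the interpreted theorem "there is a prime
# greater than 10000". Finding a witness is not the formal derivation
# itself, but it is one way the interpreted sentence can be verified.

def is_prime(n):
    """Trial-division primality test."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

n = 10001
while not is_prime(n):
    n += 1
print(n)  # 10007: a prime greater than 10000
```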
So, what is the relation between frequency and probability proposed by formalism? Theorems of probability theory may be interpreted as true statements about frequencies when you assign certain squiggles certain words and claim the resulting natural language sentence. Or, for short, we can say: "Probability theory models frequency." It is trivial to show that Kolmogorov's theory models frequency, since it also models fractions; it is an algebra, after all. More interestingly, probability theory models rational distributions of subjective degrees of belief, and the optimal updating of degrees of belief given new information. This is somewhat harder to show; Dutch book arguments do nicely to at least provide some intuitive understanding of the relation between degree of belief, betting, and probability, but there is still work to be done here. If Bayesian probability theory really does model rational belief, which many believe it does, then that is likely the most interesting thing we are ever going to be able to model with probability. But probability theory also models spatial measurement. Why not add the position that probability is volume to the debating lines of the philosophy of probability?
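Both halves of that paragraph can be given toy demonstrations (sketches of mine, with invented numbers): first, that relative frequencies satisfy the Kolmogorov axioms on a finite sample space; second, the Dutch book point that incoherent credences guarantee a loss:

```python
from fractions import Fraction

# Part 1: relative frequencies satisfy the Kolmogorov axioms.
outcomes = ["H", "H", "T", "H", "T", "T", "T", "H", "T", "T"]

def freq(event):
    """Relative frequency of an event (a set of outcome labels)."""
    return Fraction(sum(o in event for o in outcomes), len(outcomes))

assert freq({"H"}) >= 0 and freq({"T"}) >= 0           # non-negativity
assert freq({"H", "T"}) == 1                           # normalization
assert freq({"H", "T"}) == freq({"H"}) + freq({"T"})   # additivity for disjoint events

# Part 2: a Dutch book against incoherent credences. An agent whose
# degrees of belief in "heads" and "tails" sum to more than 1 accepts
# each bet "pay c now, receive 1 if the event occurs" as fair.
credence = {"H": 0.6, "T": 0.6}   # incoherent: the credences sum to 1.2
cost = sum(credence.values())     # she pays 1.2 for the pair of bets

for world in ["H", "T"]:
    payout = sum(1.0 for event in credence if event == world)
    print(world, payout - cost)   # approximately -0.2 in every possible world
```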
Why are frequentism's and subjective Bayesianism's misuses of the copula not as obvious as volumeism's? Because what the Bayesian and the frequentist are really arguing about is statistical methodology; they have just disguised the argument as an argument about what probability is. Your interpretation of probability theory will determine how you model uncertainty, and hence determine your statistical methodology. Volumeism cannot handle uncertainty in any obvious way; the Bayesian and frequentist interpretations of probability theory, however, imply two radically different ways of handling uncertainty.
The easiest way to understand the philosophical dispute between the frequentist and the subjective Bayesian is to look at the classic biased coin:
A subjective Bayesian and a frequentist are at a bar, and the bartender (being rather bored) tells the two that he has a biased coin, and asks them: "What is the probability that the coin will come up heads on the first flip?" The frequentist says that for the coin to be biased means for it not to have a 50% chance of coming up heads, so all we know is that the probability is not equal to 50%. The Bayesian says that any evidence he has for its coming up heads is also evidence for its coming up tails, since he knows nothing about one outcome that doesn't hold for its negation, and the only value which represents that symmetry is 50%.
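The Bayesian's symmetry argument can be cashed out numerically (a sketch of mine, assuming an arbitrary prior that is symmetric about 0.5 and puts no weight on 0.5 itself):

```python
# Prior-predictive probability of heads on the first flip, under a
# discretized prior over the coin's bias that is symmetric about 0.5
# and excludes 0.5 (the coin is known to be biased).

biases = [0.1, 0.3, 0.7, 0.9]    # candidate biases, none equal to 0.5
weights = [0.2, 0.3, 0.3, 0.2]   # symmetric prior over them

# P(heads on flip 1) = sum over p of P(heads | bias = p) * prior(p)
p_heads = sum(p * w for p, w in zip(biases, weights))
print(p_heads)  # 0.5 (up to float rounding), by the symmetry of the prior
```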
I ask you: what is the difference between these two and the poor souls engaged in endless debate over realism about sound at the beginning of Making Beliefs Pay Rent?
If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.”
One is being asked "Are there pressure waves in the air if we aren't around?"; the other is being asked "Are there auditory experiences if we are not around?" The problem is that "sound" is being used to stand for both "auditory experience" and "pressure waves through air". They are both giving the right answers to their respective questions, but they are failing to Replace the Symbol with the Substance: they are using one word with two different meanings in different places. In exactly the same way, "probability" is being used to stand for both "frequency of occurrence" and "rational degree of belief" in the dispute between the Bayesian and the frequentist. The correct answer to the question "If the coin is flipped an infinite number of times, how frequently would we expect it to land on heads?" is "All we know is that it wouldn't be 50%," because that is what it means for the coin to be biased. The correct answer to the question "What is the optimal degree of belief that we should assign to the first trial being heads?" is "Precisely 50%," because of the symmetrical evidential support the results get from our background information. How we should actually model the situation as statisticians depends on our goal. But remember that Bayesianism is the stronger magic, and the only contender for perfection in the competition.
For us formalists, probabilities are not anywhere. Technically, we do not even believe in probability; we only believe in probability theory. The only coherent uses of "probability" in natural language are purely syncategorematic. We should be very careful when we colloquially use "probability" as a noun or verb, and be very clear about what we mean by this word play. Probability theory models many things, including degree of belief and frequency. Whatever we may learn about rationality, frequency, measure, or any of the other mechanisms that probability theory models, through the interpretation of probability theorems, we learn because probability theory is isomorphic to those mechanisms. When you use the copula like the frequentist or the subjective Bayesian, it is hard to notice that probability theory modeling both frequency and degree of belief is not a contradiction. If we use "is" instead of "models", it is clear that frequency is not degree of belief; so if probability is belief, then it is not frequency. But though frequency is not degree of belief, frequency does model degree of belief; so if probability theory models frequency, it must also model degree of belief.