Eliezer, I have no argument with the Bayesian use of the probability calculus and so I do not side with those who say “there is no rational way to manage your uncertainty”, but I think I probably do have an argument with the insistence that it is the one true way. None of the problems you have so far outlined, including the coin one, really seem to doom either frequentism specifically, or more generally, an objective account of probability. I agree with this:
“Even before a fair coin is tossed, the notion that it has an inherent 50% probability of coming up heads may be just plain wrong. Maybe you’re holding the coin in such a way that it’s just about guaranteed to come up heads, or tails, given the force at which you flip it, and the air currents around you.”
but I question whether it really captures the frequentist position. To address the specifics: you seem to be talking about how the coin is held in one specific, concrete toss. But frequentists emphatically are not talking about individual tosses. They are talking about infinitely repeated tosses. Alternatively, you might be talking about an infinitely repeated experiment in which the coin is tossed “in such a way”, but here too I see no problem for the frequentists. Since the way of holding the coin is part of the experiment, in this case they will predict a long-run frequency of mostly heads. So they won’t get this one wrong.
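To make this concrete, here is a minimal Python sketch of that reading; the 0.95 bias is just a made-up stand-in for “just about guaranteed to come up heads”, and the large n approximates the infinitely repeated experiment:

```python
import random

random.seed(0)
p_heads = 0.95   # assumed bias induced by how the coin is held and flipped
n = 100_000      # "infinitely repeated" approximated by a large finite run

heads = sum(random.random() < p_heads for _ in range(n))
print(heads / n)  # settles near 0.95: a long-run frequency of mostly heads
```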
“But frequentists emphatically are not talking about individual tosses. They are talking about infinitely repeated tosses.”
These infinite sequences never exist, and very often they don’t even exist approximately. We only observe finite numbers of events. I think this is one of the things Jaynes had in mind when he talked about the proper handling of infinities—you should start by analyzing the finite case, and look for a well-defined limit as n increases without bound. Unfortunately, frequentist statistics starts with the limit at infinity.
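To illustrate what analyzing the finite case first looks like, here is a small sketch using Hoeffding’s inequality (the choice of eps = 0.05 and the particular values of n are arbitrary): for n observed tosses of a coin with heads-probability p, it bounds how far the empirical frequency can plausibly stray from p, and the bound has a well-defined limiting behavior as n grows, without ever invoking an actual infinite sequence.

```python
import math

# Hoeffding's inequality for n tosses of a coin with heads-probability p:
# P(|empirical frequency - p| >= eps) <= 2 * exp(-2 * n * eps**2),
# for any fixed p.
def hoeffding_bound(n, eps):
    return 2 * math.exp(-2 * n * eps**2)

# The bound is vacuous (> 1) at small n and collapses toward 0 as n grows:
# the finite case comes first, and the limit falls out of it.
for n in (10, 100, 10_000, 1_000_000):
    print(n, hoeffding_bound(n, eps=0.05))
```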
As an example of how these limiting frequencies taken over infinite sequences often make no sense in real-world situations, consider statistical models of human language, such as those used in automatic speech recognition. Such models assign a prior probability to each possible utterance a person could make. What does it mean, from a frequentist standpoint, to say that there is a probability of 1e-100 that a person will say “The tomatoe flew dollars down the pipe”? There haven’t been 1e100 separate utterances by all human beings in all of human history, so how could a probability of 1e-100 possibly correspond to some sort of long-run frequency?
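To see where a number like 1e-100 actually comes from, here is a toy word-bigram model with add-one smoothing; the tiny corpus is made up for illustration, but real systems use the same chain-rule structure at vastly larger scale. The probability of an utterance is a product of many per-word factors, each less than one, so long sentences get astronomically small numbers that were never observed as frequencies of anything:

```python
from collections import Counter

# Toy corpus; a real model would be trained on vastly more text.
corpus = "the coin came up heads . the coin came up tails .".split()
vocab = set(corpus) | {"<unk>"}
V = len(vocab)

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def p_next(word, prev):
    # Add-one smoothed estimate of P(word | prev): every pair gets a
    # nonzero probability, however small.
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + V)

def p_utterance(text):
    # Chain rule over bigrams (the first word's marginal is omitted for
    # brevity). A product of many factors < 1 shrinks geometrically,
    # which is how long sentences end up near 1e-100.
    words = [w if w in vocab else "<unk>" for w in text.split()]
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= p_next(word, prev)
    return p

print(p_utterance("the tomatoe flew dollars down the pipe"))  # about 2.4e-06
```

Even this seven-word sentence over an eight-word vocabulary lands around 1e-6; scale the vocabulary to tens of thousands of words and the sentence to ordinary length, and values like 1e-100 become routine, with no long-run frequency anywhere in sight.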