On the other hand, it’s evidence to me that we’re talking about different types of minds. Have we identified whether this aspect of frequentism is a choice, or just the way their minds work?
I’m a frequentist, I think, and when I interrogate my intuition about whether 50% heads / 50% tails is a property of a fair coin, it returns ‘yes’. However, I understand that this property is an abstract one, and my intuition doesn’t yield any empirical predictions about the coin that differ from those a Bayesian would make. Thus, what difference does it make if I find it natural to assign this property?
In other words, in what (empirically measurable!) sense could it be crazy?
http://comptop.stanford.edu/preprints/heads.pdf

Well, the immediate objection is that if you hand the coin to a skilled tosser, the frequencies of heads and tails in the tosses can be markedly different from 50%. If you put this probability in the coin, then you really aren’t modeling things in a manner that accords with results. You can, of course, talk instead about a procedure of coin-tossing, which naturally has to specify the coin as well.
Of course, that merely pushes things back a level. If you completely specify the tossing procedure (people have built coin-tossing machines), then you can repeatedly get 100%/0% splits by careful tuning. If you don’t know whether it is tuned to 100% heads or 100% tails, is it still useful to describe this situation probabilistically? A hard-core Frequentist “should” say no, as everything is deterministic. Most people are willing to allow that 50% probability is a reasonable description of the situation. To the extent that you do allow this, you are Bayesian. To the extent that you don’t, you’re missing an apparently valuable technique.
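To make the tuned-machine point concrete, here is a minimal Python sketch; the setup and names (make_machine, hidden_tuning) are my own illustration, not anything from this exchange. The machine is entirely deterministic, yet before seeing a toss, averaging over the two tunings you consider equally plausible still gives 50% for heads, and a single observation then collapses that to certainty.

```python
import random

# Illustrative sketch: a coin-tossing machine deterministically tuned to give
# either all heads or all tails. We don't know which tuning was chosen, so we
# spread our ignorance 50/50 over the two possibilities.

def make_machine(tuning):
    """Return a deterministic tossing procedure: every toss lands on `tuning`."""
    return lambda: tuning

hidden_tuning = random.choice(["heads", "tails"])  # nature's choice; we don't peek
machine = make_machine(hidden_tuning)

# Before observing anything:
# P(heads) = P(tuned heads) * 1 + P(tuned tails) * 0 = 0.5
prior_heads = 0.5 * 1.0 + 0.5 * 0.0
print("Assigned to heads before any toss:", prior_heads)

# One observed toss reveals the tuning, since the machine is deterministic.
first_toss = machine()
posterior_heads = 1.0 if first_toss == "heads" else 0.0
print("Assigned to heads after one toss:", posterior_heads)
```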
The frequentist can account for the biased toss and determinism, in various ways.
My preferred reply would be that the 50⁄50 is a property of the symmetry of the coin. (Of course, it’s a property of an idealized coin. Heck, a real coin can land balanced on its edge.) If someone tosses the coin in a way that biases the outcome, she has actually broken that symmetry with her initial conditions. In particular, the tosser must begin with the knowledge of which way she is holding the coin; if she doesn’t know which face is up, she can’t bias the outcome.
I understand that Bayesians don’t tend to abstract things to their idealized forms … I wonder to what extent Frequentism does this necessarily. (What is the relationship between Frequentism and Platonism?)
The frequentist can account for these things, in various ways.
Oh, absolutely. The typical way is choosing some reference class of idealized experiments that could be done. Of course, the right choice of reference class is just as arbitrary as the right choice of Bayesian prior.
My preferred reply would be that the 50⁄50 is a property of the symmetry of the coin.
Whereas the Bayesian would argue that the 50⁄50 property is a symmetry of our knowledge about the coin: it holds even for a coin that you know is biased, so long as you have no evidence about which way it is biased.
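A small worked example of that knowledge-symmetry point (again my own sketch, with a made-up bias magnitude p, not something from the thread): if you know the coin is biased by some amount toward one face but have no evidence about which face, averaging over the two equally plausible directions gives exactly one half.

```python
# Coin known to be biased with weight p toward one face, direction unknown.
p = 0.7  # hypothetical magnitude of the bias

# Marginalize over the two equally plausible directions of the bias:
# P(heads) = 0.5 * p + 0.5 * (1 - p)
prob_heads = 0.5 * p + 0.5 * (1 - p)
print(prob_heads)  # 0.5 for any p: the 50/50 lives in our knowledge, not in the coin
```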
I understand that Bayesians don’t tend to abstract things to their idealized forms
Well, I don’t think Bayesians are particularly reluctant to look at idealized forms; it’s just that when you can make your model more closely match the situation (without incurring horrendous calculational difficulties), there is a benefit to doing so.
And of course, the question is “which idealized form?” There are many ways to idealize almost any situation, and I think talking about “the” idealized form can be misleading. Talking about a “fair coin” is already a serious abstraction and idealization, but it’s one that has, of course, proven quite useful.
I wonder to what extent Frequentism does this necessarily. (What is the relationship between Frequentism and Platonism?)
That’s a very interesting question.

To quote from Gelman’s rejoinder that Phil Goetz mentioned,

In a nutshell: Bayesian statistics is about making probability statements, frequentist statistics is about evaluating probability statements.
So, speaking very loosely, Bayesianism is to science, inductive logic, and Aristotelianism as frequentism is to math, deductive logic, and Platonism. That is, Bayesianism is synthesis; frequentism is analysis.

Interesting! That makes a lot of sense to me, because I had already made connections between science and Aristotelianism, pure math and Platonism.