I am a Platonist about mathematics by inclination, though I strongly suspect that this inclination is one that I should resist taking too seriously. I am a Bayesian about probability (at least in the following sense: it seems to me that the Bayesian approach subsumes the others when they are applied correctly). I am mostly Bayesian about statistics, but don’t see any reason why you shouldn’t compute confidence intervals and unbiased estimators if you want to. I don’t think “Platonist” and “frequentist” are at all the same thing, so I don’t see any of the above as indicating that I’m (inclined to be) Platonist about some things but not about others.
[...] the fundamental truth [...]
This seems to have prompted a debate about whether The Fundamental Truth is one about the general propensities of the coin, or one about what will happen the next time it’s flipped. I don’t see why there should be exactly one Fundamental Truth about the coin; I’d have thought there would be either none or many depending on what sort of thing one wishes to count as a “fundamental truth”.
Anyway: imagine a precision robot coin-flipper. I hope it’s clear that with such a device one could arrange that the next million flips of the coin all come up heads, and then melt it down. So whatever “fundamental truth” there might be about What The Coin Will Do has to be relative to some model of what’s going to be done to it. The point of coin-flipping is that it’s a sort of randomness magnifier: small variations in what you do to it make bigger differences to what it does, so a small patch of possibility-space gets turned into a somewhat-uniform sampling of a larger patch (caution: Liouville, volume conservation, etc.). And the “fundamental truth” about the coin that you’re appealing to is that, plus what it implies about its ability to turn kinda-sorta-slightly-random-ish coin flipping actions into much more random-ish outcomes. To turn that into an actual expectation of (more or less) independent p=1/2 Bernoulli trials, you need to add some assumption about how people actually flip coins, and then the magic of physics means that a wide range of such assumptions all lead to very similar-looking conclusions about what the outcomes are likely to look like.
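The randomness-magnifier point can be sketched with a toy deterministic model in the spirit of Keller's coin-flip analysis. Everything here is an illustrative assumption, not a measurement: the `flip` function, the typical spin rate and launch speed, and their spreads are all made up to show the mechanism.

```python
import math
import random

def flip(omega, u, g=9.81):
    """Toy deterministic coin-flip model: the coin leaves the hand
    heads-up, spinning at angular velocity omega (rad/s) and rising
    at speed u (m/s).  It lands t = 2u/g seconds later, having
    completed omega*t/pi half-turns; an even count of half-turns
    means it lands heads-up again."""
    t = 2 * u / g
    half_turns = int(omega * t / math.pi)
    return "H" if half_turns % 2 == 0 else "T"

random.seed(0)

# A human flipper: spin rate and launch speed vary a little from
# flip to flip (the means and spreads here are illustrative guesses).
flips = [flip(random.gauss(200, 20), random.gauss(2.5, 0.25))
         for _ in range(100_000)]
frac_heads = flips.count("H") / len(flips)
print(frac_heads)  # close to 0.5

# A precision robot reproduces (omega, u) exactly every time, so the
# deterministic model gives the same outcome on every flip.
robot_outcomes = {flip(200.0, 2.5) for _ in range(1_000)}
print(robot_outcomes)  # a single outcome
```

Small wobbles in spin and launch speed smear the outcome across many half-turn parities, so the aggregate looks like fair Bernoulli trials; fix the inputs exactly and the "randomness" vanishes entirely.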
In other words: an accurate version of the frequentist way of looking at the coin’s behaviour starts with some assumption (wherever it happens to come from) about how coins actually get flipped, mixes that with some (not really probabilistic) facts about the coin, and ends up with a conclusion about what the coin is likely to do when flipped, which doesn’t depend too sensitively on that assumption we made.
Whereas a Bayesian way of looking at it starts with some assumption (wherever it happens to come from) about what happens when coins get flipped, mixes that with some (not really probabilistic) facts about what the coin has been observed to do and perhaps a bit of physics, and ends up with a conclusion about what the coin is likely to do when flipped in the future, which doesn’t depend too sensitively on that assumption we made.
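The "doesn't depend too sensitively on that assumption" point can be made concrete with a conjugate Beta-Binomial sketch. The observed counts and the candidate priors below are made up for illustration:

```python
# Conjugate Beta-Binomial update: with a Beta(a, b) prior on the
# heads-probability p, observing h heads and t tails yields a
# Beta(a + h, b + t) posterior, whose mean is (a + h)/(a + b + h + t).
def posterior_mean(a, b, heads, tails):
    return (a + heads) / (a + b + heads + tails)

heads, tails = 480, 520  # hypothetical observed flips

# Several quite different starting assumptions: uniform, Jeffreys,
# a confident "it's fair" prior, and a lopsided one.
priors = [(1, 1), (0.5, 0.5), (10, 10), (2, 8)]
means = [posterior_mean(a, b, heads, tails) for a, b in priors]
for (a, b), m in zip(priors, means):
    print(f"Beta({a}, {b}) prior -> posterior mean {m:.4f}")

print(max(means) - min(means))  # the choice of prior barely matters
```

With a thousand observed flips, priors as different as Beta(0.5, 0.5) and Beta(2, 8) give posterior means agreeing to within a few thousandths: the data washes out the starting assumption.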
Clearly the philosophical differences here are irreconcilable...