Bayesian decisions cannot be made when no probability distribution can be assigned to the outcomes.
As mentioned, you can instead consider a Bayesian probability distribution over which distribution is the correct one. If you have no reason to say that any one state is more probable than another, they all receive the same weight in this meta-distribution: if you know that a coin is unfair but have no information about which way it is biased, then you should divide the first bet evenly between heads and tails (assuming logarithmic payoffs).
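A small sketch of that claim (the betting setup, even 2x payouts on each side, is my assumption; it is not specified above): with logarithmic utility, the split of the bankroll that maximizes expected log wealth is exactly your probability of heads, so symmetric ignorance gives an even split.

```python
import math

def expected_log_payoff(f_heads, p_heads):
    """Expected log wealth when betting fraction f_heads of a unit
    bankroll on heads (the rest on tails), each side paying 2x the
    stake placed on it."""
    f_tails = 1.0 - f_heads
    return (p_heads * math.log(2 * f_heads)
            + (1 - p_heads) * math.log(2 * f_tails))

# Grid search over bet splits under symmetric ignorance, p_heads = 0.5:
grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=lambda f: expected_log_payoff(f, 0.5))
print(best)  # → 0.5, the even split
```

The even split is a fixed point here: any lopsided bet trades a large possible log loss for a smaller possible log gain.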
It might make sense to picture the probability distribution over the coin's fairness as a graph: the x-axis, running from 0 to 1, is the chance of each flip coming up heads, and the y-axis is the probability that the coin has that particular bias. Because of our prior information that the coin is unfair, there is a removable discontinuity at x = 1/2. Initially the graph is flat: having no prior information about how the coin is weighted, you can assume that all weightings (except fair) are equally likely. After the first flip it changes: if the flip came up tails, the probability of a two-headed coin is now 0, the probability of a coin biased 0.9999 toward heads is very small, and the probability of a tail-weighted coin is significantly greater. After the second flip you have some information about what the bias of the coin was, but no information about whether the bias is time-variable; for example, the coin might always come up heads on prime-numbered flips and always tails on composite ones.
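The graph described above can be sketched as a discretized prior over the coin's heads-bias, with the fair value 1/2 excluded (the removable discontinuity), updated by Bayes' rule after observing one tails. The 101-point grid is an arbitrary choice for illustration.

```python
def update(prior, outcome_heads):
    """Multiply each bias hypothesis by its likelihood for the observed
    flip, then renormalize so the posterior sums to 1."""
    posterior = {b: p * (b if outcome_heads else 1 - b)
                 for b, p in prior.items()}
    total = sum(posterior.values())
    return {b: p / total for b, p in posterior.items()}

# Flat prior over biases on [0, 1], with the fair coin (0.5) excluded:
n = 101
biases = [i / (n - 1) for i in range(n) if i / (n - 1) != 0.5]
prior = {b: 1 / len(biases) for b in biases}

posterior = update(prior, outcome_heads=False)  # first flip came up tails
print(posterior[1.0])   # → 0.0: the two-headed coin is ruled out
print(posterior[0.0] > posterior[0.99])  # → True: tail-weighting favored
```

Note what this model cannot express: every hypothesis on the x-axis assumes the bias is constant across flips, so no update within it can speak to the time-varying riggings mentioned above.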
If you consider a coin rigged to follow a fixed sequence to be just as likely as a coin whose result is randomly determined on each flip, then you have a problem: some gaps in a prior cannot be filled in by any amount of information.
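One way to see why no evidence can break this tie (a sketch under the assumption that the rigged model spreads its prior uniformly over all 2^n fixed sequences of length n): a fair random coin assigns any observed n-flip sequence probability (1/2)^n, while the rigged model puts prior 2^-n on each fixed sequence, of which exactly one matches the data with likelihood 1. The two marginal likelihoods are identical, so the posterior odds between the models equal the prior odds forever.

```python
def marginal_likelihood_random(sequence):
    """A fair random coin: every length-n sequence has probability (1/2)^n."""
    return 0.5 ** len(sequence)

def marginal_likelihood_rigged(sequence):
    """Uniform prior over all 2^n fixed sequences: exactly one matches
    the data with likelihood 1, the rest with likelihood 0."""
    n = len(sequence)
    return (1 / 2 ** n) * 1.0

# No observation sequence shifts the odds between the two models:
for seq in ["H", "HT", "HTHHT", "T" * 20]:
    assert marginal_likelihood_random(seq) == marginal_likelihood_rigged(seq)
print("posterior odds between the models never move")
```

The deadlock is not a shortage of data but a shortage of prior structure: until you assign relative priors among the rigging hypotheses themselves, the question is not one Bayes' rule can answer.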