This is a standard objection, and one that used to convince me. But I really can’t see that F is different from E, and so on down the line. Where exactly does this issue come up? Is it in the change from E to F, or earlier?
No, I was suggesting that the difference is between F and SIA.
Ah, I see. This is more a question about the exact meaning of probability, i.e. the difference between a frequentist approach and a Bayesian “degree of belief”.
To get a “degree of belief” SIA, extend F to G: here you are simply told that one of two possible universes (A or B) happened, in each of which a certain number of copies of you were created. You should then set your subjective probability to 50%, in the absence of other information. Then you are told the numbers, and need to update your estimate.
If your estimates for G differ from those for F, then you are in the odd position of having started with a 50-50 probability estimate and then updated, yet if you were later told that the initial 50-50 came from a coin toss rather than being an arbitrary guess, you would have to change your estimates!
I think this argument extends the chain from F to G, and hence to universal SIA.
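(To make the update in G concrete, here is a minimal sketch of my own, not part of the original exchange, assuming the usual SIA rule that the 50-50 prior gets weighted by the number of copies of you in each universe:)

```python
# Sketch: SIA-style update in scenario G. Start from a 50-50 prior over
# universes A and B, then weight each by how many copies of you it contains
# once those numbers are revealed. Illustrative only.

def sia_posterior(prior_a: float, copies_a: int, copies_b: int) -> float:
    """Posterior probability of universe A after weighting the prior by copy counts."""
    weight_a = prior_a * copies_a
    weight_b = (1 - prior_a) * copies_b
    return weight_a / (weight_a + weight_b)

# E.g. if A contains 1 copy of you and B contains 1,000,000:
print(sia_posterior(0.5, 1, 1_000_000))  # ~1e-06: you should bet heavily on B
```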
Thanks, that’s helpful. Though intuitively, it doesn’t seem so unreasonable to treat a credal state due to knowledge of chances differently from one that instead reflects total ignorance. (Even Bayesians want some way to distinguish these, right?)
What do you mean by “knowledge of chances”? There is no inherent chance or probability in a coin flip. The result is deterministically determined by the state of the coin, its environment, and how it is flipped. The probability of .5 for heads represents your own ignorance of all these initial conditions and your inability, even if you had all that information, to perform all the computation needed to reach the logical conclusion of what the result will be.
I’m just talking about the difference between, e.g., knowing that a coin is fair, versus not having a clue about the properties of the coin and its propensity to produce various outcomes given minor permutations in initial conditions.
By “a coin is fair”, do you mean that if we considered all the possible environments in which the coin could be flipped (or some subset we care about), and all the ways the coin could be flipped, then in half the combinations the result will be heads, and in the other half the result will be tails?
Why should that matter? In the actual coin flip whose result we care about, the whole system is not “fair”, there is one result that it definitely produces, and our probabilities just represent our uncertainty about which one.
What if I tell you the coin is not fair, but I don’t have any clue which side it favors? Your probability for the result of heads is still .5, and we still reach all the same conclusions.
For one thing, it’ll change how we update. Suppose the coin lands heads ten times in a row. If we have independent knowledge that it’s fair, we’ll still assign 0.5 credence to heads on the next toss. Otherwise, if we began in a state of pure ignorance, we might start to suspect that the coin is biased, and so have different expectations.
That is true, but in the scenario, you never learn the result of a coin flip to update on. So why does it matter?
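(As a concrete illustration of the difference in how the two credal states update, here is a small sketch of my own, assuming “total ignorance” is modeled by a uniform prior on the coin’s bias:)

```python
# Sketch: predictive probability of heads on the next toss after seeing
# ten heads in a row, under two credal states. Illustrative only.

def predictive_known_fair(heads: int, tosses: int) -> float:
    """Coin known to be fair: the observed run changes nothing."""
    return 0.5

def predictive_uniform_prior(heads: int, tosses: int) -> float:
    """Uniform Beta(1, 1) prior on the bias: h heads in n tosses gives a
    Beta(1 + h, 1 + n - h) posterior, whose mean is (h + 1) / (n + 2)
    (Laplace's rule of succession)."""
    return (heads + 1) / (tosses + 2)

print(predictive_known_fair(10, 10))     # 0.5  -- knowledge of chances
print(predictive_uniform_prior(10, 10))  # ~0.917 -- ignorance, now updated
```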