The statement, “You should expect that, on average, a test will leave your beliefs unchanged,” means that you cannot expect an unbiased test to change your beliefs in a particular direction, as is clear from the original post.
Of course you expect to hold different beliefs after the test. If you didn’t, the test would not be worth doing. But you are not more likely to end up at (100% heads, 0% tails) than (0% heads, 100% tails).
On the other hand, if you think it is more likely that you will end up at, say, (0% heads, 100% tails), then you cannot rightly claim that you currently believe the coin to be fair (your 50%, 50% estimate does not reflect your true expectations).
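To make that concrete, here is a minimal Python sketch (the numbers are purely illustrative, not from the original post), where the “test” is simply looking at how the flip landed:

```python
# Sketch (illustrative numbers): if the test simply reveals the flip,
# the probability of ending up certain of "heads" *is* your current
# credence in heads, so a fair-coin believer cannot favour either end.
prior_heads = 0.5
branches = [
    (prior_heads,     1.0),  # see heads -> posterior P(heads) = 1
    (1 - prior_heads, 0.0),  # see tails -> posterior P(heads) = 0
]
expected_posterior = sum(p * post for p, post in branches)
print(expected_posterior)  # 0.5 -- equal to the prior

# Conversely, claiming "I'm at 50/50" while expecting, say, a 70% chance
# of ending up at (0% heads) is inconsistent: the implied prior is 0.3.
implied_prior = 0.7 * 0.0 + 0.3 * 1.0
print(implied_prior)  # 0.3, not 0.5
```

The only way to expect a drift toward one end is to start from a prior that already leans that way.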
That said, it’s far from the most easily accessible formulation of that meaning imaginable.
I mean, sure, the future state in which half of my measure has ~1 confidence in “heads” and half of my measure has ~0 confidence in “heads” is in some sense not a change from my current state where I have .5 confidence in “heads”, but that’s not how most people will interpret “leave your beliefs unchanged.”
It seems more accessible to say that if I expect a test to update my beliefs in a particular direction, I should go ahead and update my beliefs in that direction now (and perform the test as confirmation).
Of course, this advice presumes that I won’t anchor on my new belief. Which, given that I’m human, is not a safe assumption.
I would suggest that you should expect your beliefs to change in 100% of cases. Currently, you believe in a 50% probability. After doing the test, we have a set of universes, in some of which you believe a 100% probability and in some of which you believe a 0% probability. Your belief changed in every single one.
X and Y can be averaged out, but belief in number X and belief in number Y don’t average out to be “belief in the average of X and Y”.
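A small sketch of that last point, under my own assumed framing that the “numbers” are possible biases of the coin: a 50/50 mixture of “believe the bias is 1” and “believe the bias is 0” makes the same prediction about a single flip as a flat belief in a bias of 0.5, but the two belief states diverge on anything compound.

```python
# Sketch (my own illustration): a 50/50 mixture of "believe the bias
# is 1" and "believe the bias is 0" is not the same belief state as
# "believe the bias is 0.5", even though both assign 0.5 to heads on
# a single flip.
point_belief = 0.5                       # belief in the average: bias = 0.5
mixture      = [(0.5, 1.0), (0.5, 0.0)]  # (weight, believed bias)

# Probability of seeing two heads in a row under each belief state.
p_two_heads_point   = point_belief ** 2
p_two_heads_mixture = sum(w * b ** 2 for w, b in mixture)

print(p_two_heads_point)    # 0.25
print(p_two_heads_mixture)  # 0.5 -- the beliefs don't "average out"
```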
The statement, “You should expect that, on average, a test will leave your beliefs unchanged,” means that you cannot expect an unbiased test to change your beliefs in a particular direction, as is clear from the original post.
Actually you can: Shake a box with a coin you know to be biased. Before you look into the box, your belief for heads is, say, 80%. You expect that it is more likely that, when you open the box, your belief will change to 100% heads rather than to 0%.
I don’t think there is a useful way to patch the statement without making explicit reference to the technical definition of Bayesian belief.
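For what it’s worth, the box example can be run through the same arithmetic (using the commenter’s 80% figure): the belief is more likely to move toward “heads”, yet the expected posterior still equals the prior, which is the only reading under which the original statement survives.

```python
# Sketch of the biased-coin case above (80% is the commenter's figure):
# the posterior is *more likely* to move up than down, yet its expected
# value still equals the prior.
prior_heads = 0.8
branches = [
    (0.8, 1.0),  # with probability 0.8 you see heads -> posterior 1.0
    (0.2, 0.0),  # with probability 0.2 you see tails -> posterior 0.0
]

p_up   = sum(p for p, post in branches if post > prior_heads)
p_down = sum(p for p, post in branches if post < prior_heads)
expected_posterior = sum(p * post for p, post in branches)

print(p_up, p_down)        # 0.8 0.2 -- a move toward "heads" is more likely
print(expected_posterior)  # 0.8     -- but the *expected* belief is unchanged
```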