Beliefs ought to bob around in the stream of evidence as a random walk without trend.
Unless you have actually used evidence to form the belief. If previous evidence genuinely supports a belief then, unless you believe the real world is random, further evidence is more likely to support it than not.
The fact that the sun rose on June 3rd 1978 and the fact that the sun rose on February 16th 1860 are both evidence that the sun will rise in the future.
Those are not “many reasons”; that’s one reason repeated many times.
The claim that further evidence is then more likely to support the belief turns out not to be the case, though the reason why can initially seem unintuitive. If you have fully used all the information from a piece of evidence A, that includes the fact that a correlated piece of evidence B becomes more likely to come up. This means that B will sway your beliefs less, because it is not a surprise. Contrariwise, an anticorrelated piece of evidence C will be less likely to come up, and hence will be more of a surprise if it does, and will move your beliefs further. Averaging over all possible new pieces of evidence, weighted by how likely each is, it has to be a wash; if it were not a wash, you should already have updated to the point that would be your average expected update.
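In symbols, the reason it has to be a wash is just conservation of expected evidence. Writing H for the belief and e for the possible next pieces of evidence:

$$\mathbb{E}_{e}\left[P(H \mid e)\right] \;=\; \sum_{e} P(e)\,P(H \mid e) \;=\; \sum_{e} P(H, e) \;=\; P(H)$$

The posterior may move a lot on any particular e; it just cannot be expected in advance to move in any particular direction.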
(Note that for something like parameter estimation, where rather than a single belief you track a probability density, each point of the density will on average stay the same under any new piece of evidence, but which parts go up, which go down, and by how much are all highly correlated.)
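A minimal sketch of that density version, assuming a conjugate Beta–Bernoulli model (the Beta(2, 3) prior and the grid are illustrative choices of mine, not anything from the thread): weighting the two possible posterior densities by the current predictive probability of each observation reproduces the prior density at every point, even though each individual update reshapes the whole curve.

```python
import numpy as np
from scipy.stats import beta

# Illustrative prior over a coin's bias theta: Beta(a, b).
a, b = 2.0, 3.0
theta = np.linspace(0.01, 0.99, 99)   # grid of parameter values
prior_pdf = beta(a, b).pdf(theta)

# Predictive probability that the next flip is heads under the
# current belief: E[theta] = a / (a + b).
p_heads = a / (a + b)

# Conjugate updates: heads -> Beta(a+1, b), tails -> Beta(a, b+1).
post_heads = beta(a + 1, b).pdf(theta)
post_tails = beta(a, b + 1).pdf(theta)

# Average the two possible posteriors, weighted by how likely
# each observation is right now.
expected_post = p_heads * post_heads + (1 - p_heads) * post_tails

# Each point of the density stays the same on average...
assert np.allclose(expected_post, prior_pdf)

# ...even though each individual update moves the whole curve:
print(np.max(np.abs(post_heads - prior_pdf)))  # noticeably nonzero
```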
By “belief”, grandparent means the probability of a hypothesis, which does bob around without trend in a perfect Bayesian reasoner.
The impression I get of the difference here between a “belief” and a “hypothesis” is something like this:
I have the belief that the sun will continue to rise for a long, long time.
This is probably “true.”
I have the hypothesis that the sun will rise tomorrow morning with probability .999999.
Conservation of expected evidence requires that, in pure Bayesian fashion, if it does rise tomorrow my probability will rise to .9999991, and if it doesn’t it will shoot down to .3 or something, in a way that makes the probability-weighted view of all possible shifts a random walk without trend.
That is, if your hypothesis is “true” you have well-placed confidence; if it comes true more often than the stated probability, you were underconfident and the hypothesis has an issue.
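The .3 above is explicitly a placeholder (“or something”), and conservation of expected evidence actually pins down what the downward jump must be once the prior, the upward jump, and the predictive probability of a sunrise are fixed. A quick check, under my simplifying assumption that the predictive probability of tomorrow’s sunrise equals the stated .999999:

```python
# Conservation of expected evidence:
#   prior = P(rise) * post_rise + P(no rise) * post_no_rise
prior = 0.999999       # current probability the sun rises tomorrow
post_rise = 0.9999991  # posterior after seeing the sunrise (from the comment)
p_rise = prior         # assumption: predict the sunrise with the same .999999

# Solve for the posterior the failure case must produce.
post_no_rise = (prior - p_rise * post_rise) / (1 - p_rise)
print(post_no_rise)    # -> about 0.9, not 0.3

# Whatever numbers are used, as long as they satisfy the identity,
# the expected posterior equals the prior.
expected = p_rise * post_rise + (1 - p_rise) * post_no_rise
assert abs(expected - prior) < 1e-12
```

Under these particular numbers the failure case lands near .9 rather than .3, but the shape of the argument is unchanged: the rare observation moves the belief a long way, the common one barely at all, and the probability-weighted shift is exactly zero, which is what makes the sequence of updates a random walk without trend.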