Sure, but what I mean is that this is hard to do for hypothesis-location, since post-update you still have the hypothesis-locating information, and there’s some chance that your “explaining away” was itself incorrect (or your memory is bad, or you have bugs in your code...).
For an extreme case, take Donald’s example, where the initial prior would be 8,000,000 bits against. Locating the hypothesis there gives you ~8,000,000 bits of evidence. The amount of evidence you can get back from an “explaining away” process is bounded by your confidence in the new evidence. How sure are you that you correctly observed and interpreted the “explaining away” evidence? Maybe you’re 20 bits sure; perhaps 40 bits sure. You’re not 8,000,000 bits sure.
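To make that bound concrete, here’s a rough sketch in my own notation (the event $B$, the error probability $\varepsilon$, and the value $p$ are labels I’m introducing, not anything from Donald’s example). Let $E$ be the explaining-away evidence as you observed and interpreted it, and let $B$ be the event that this observation/interpretation was itself wrong; being “40 bits sure” then means roughly $P(B \mid E) \approx \varepsilon \approx 2^{-40}$. If, conditional on $B$, the hypothesis keeps something like the probability $p$ it had after hypothesis-location (a spurious observation tells you little about $H$), then

$$P(H \mid E) \;\ge\; P(B \mid E)\,P(H \mid B, E) \;\approx\; \varepsilon\, p \;\approx\; 2^{-40} p,$$

so a single explaining-away step can push $H$ down by at most about $\log_2(1/\varepsilon) \approx 40$ bits below $p$. It cannot give you back anything like 8,000,000 bits.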
Then let’s say you’ve updated down quite a few times, but not yet close to the initial prior value. For the next update, how sure are you that the stored value you’ll be using as your new prior is correct? If you’re human, perhaps you misremembered; if you’re a computer system, perhaps there’s a bug... Below a certain point, the new probability you arrive at will be dominated by contributions from weird bugs, misrememberings, etc. This remains true until/unless you lose the information describing the hypothesis itself.
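The same decomposition gives the floor (again a sketch; $C$, $\delta$, and $q$ are my labels): let $C$ be the event that the stored value is corrupted (misremembering, bug, ...), with $P(C) \approx \delta$, and suppose that conditional on $C$ the hypothesis would get some not-astronomically-small probability $q$. Then

$$P(H \mid \text{data}) \;\ge\; P(C \mid \text{data})\,P(H \mid C, \text{data}) \;\approx\; \delta\, q,$$

so once the stored number falls below roughly $\delta q$, further arithmetic on it is tracking the error term rather than the evidence. The floor only disappears if you lose the information that picks out the hypothesis, at which point there is no stored value left to correct.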
I’m not clear on how much of a practical problem this is; I agree you can update the odds of a hypothesis down to no-longer-worthy-of-consideration. In general, though, I don’t think you can get back to the original prior without making invalid assumptions (e.g. assigning zero probability to a bug/hack/hoax...), or losing the information that picks out the hypothesis.