The idea that a single falsification shatters the whole theory sounds like a calculation where the prior just gets tossed. However, in most calculations the prior still affects the result. If you start from some prior, then either see or don’t see relativistic patterns for 100 years, and then see a relativity violation, a perfect Bayesian would not end with the same final belief in both cases. Using the updated prior versus the ignorant prior makes a difference, and the outcome is genuinely a different degree of belief. Or, put another way: if you suddenly gain access to the middle-time evidence that you missed, it still impacts a perfect reasoner. Gaining 100 years’ worth of relativistic patterns increases credence in relativity even if it has already been falsified.
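A minimal sketch of this point, with made-up likelihood ratios: working in log-odds makes it explicit that a perfect Bayesian’s final posterior depends on *which* evidence is incorporated, not on *when*, since each piece of evidence just adds its log-likelihood-ratio.

```python
import math

def posterior_from_log_odds(lo: float) -> float:
    """Convert log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-lo))

prior_log_odds = 0.0  # ignorant prior: P(theory) = 0.5

# 100 years of relativistic patterns, each (by assumption) 10x likelier
# under the theory than under its negation.
confirmation = math.log(0.10 / 0.01)

# One apparent violation: tiny but nonzero probability under the theory
# (instruments can err), far likelier under the theory's negation.
violation = math.log(1e-6 / 0.5)

# Updating is addition in log-odds, so the order of evidence cannot matter;
# what matters is whether the middle-time evidence is included at all.
with_history = prior_log_odds + 100 * confirmation + violation
without_history = prior_log_odds + violation

print(posterior_from_log_odds(with_history))     # close to 1.0
print(posterior_from_log_odds(without_history))  # on the order of 2e-6
```

With the century of patterns included, the single violation barely dents the posterior; starting from the ignorant prior, the same violation leaves almost no credence. Same falsifying datum, genuinely different degrees of belief.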
Maybe “destroying the theory” was not a good choice of words—the theory will more likely be “demoted” to the status of “very good approximation”. Like gravity. But the distinction I’m trying to make here is between super-accurate sciences like physics, which give exact predictions, and still-accurate-but-not-as-physics fields. If medicine says masks are 99% effective, and they were not effective for 100 out of 100 patients, the theory still assigned a probability of 10⁻²⁰⁰ that this would happen. You need to update it, but you don’t have to “throw it out”. But if physics says a photon should fire and it didn’t fire—then the theory is wrong. Your model did not assign any probability at all to the possibility of the photon not firing.
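The mask arithmetic above can be checked directly: if the theory says each patient independently has a 1% chance of the mask not working, then 100 failures out of 100 has probability 0.01¹⁰⁰ = 10⁻²⁰⁰—astronomically small, but still strictly positive, so Bayes’ rule can update on it rather than break.

```python
from math import log10

# Theory: masks are 99% effective, so each (independent) failure has
# probability 0.01. One hundred failures in a row:
p_single_failure = 0.01
p_all_100_fail = p_single_failure ** 100

print(p_all_100_fail)         # ~1e-200
print(log10(p_all_100_fail))  # ~-200
```

The physics case has no analogous number: a likelihood of exactly zero cannot be rescued by any amount of updating, which is what the list below is about.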
And before anyone brings up 0 And 1 Are Not Probabilities, remember that in the real world:

- There is a probability that the photon did fire and our instruments missed it.
- There is a probability that we unknowingly failed to set up, or to confirm, the conditions our theory required in order for the photon to fire.
- We do not assign 100% probability to our theory being correct, so we can just throw it out without Laplace condemning us to hell for an infinitely negative score.
This means that the falsifying evidence, on its own, does not destroy the theory—but it can still weaken it severely. And my point (which I’ve detoured too far from) is that a perfect Bayesian should arrive at the same final posterior no matter at which stage they incorporate that evidence.
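A sketch of the photon example with the real-world caveats above folded in. All numbers are illustrative assumptions, not measured rates: the theory says the photon must fire, but “no fire” can still be observed if the instrument missed it or the setup silently failed.

```python
p_theory = 0.99  # credence in the theory before the experiment

# The theory forbids "no fire", but observation is imperfect:
p_instrument_miss = 1e-4
p_setup_failure = 1e-4
p_nofire_given_theory = p_instrument_miss + p_setup_failure  # ~2e-4, not 0

# If the theory is wrong, "no fire" is far less surprising.
p_nofire_given_not_theory = 0.5

# One Bayesian update on the observed "no fire":
num = p_theory * p_nofire_given_theory
posterior = num / (num + (1 - p_theory) * p_nofire_given_not_theory)

print(posterior)  # roughly 0.04: severely weakened, but not destroyed
```

Because the caveats keep the likelihood nonzero, the posterior collapses from 0.99 to a few percent rather than to zero—exactly the “weakened severely but not destroyed” outcome described above.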