Maybe “destroying the theory” was not a good choice of words—the theory will more likely be “demoted” to the status of “very good approximation”. Like gravity. But the distinction I’m trying to make here is between super-accurate sciences like physics that give exact predictions and still-accurate-but-not-as-physics fields. If medicine says masks are 99% effective, and they were not effective for 100 out of 100 patients, the theory still assigned a probability of 10⁻²⁰⁰ to that happening. You need to update on it, but you don’t have to “throw it out”. But if physics says a photon should fire and it didn’t fire—then the theory is wrong. Your model did not assign any probability at all to the possibility of the photon not firing.
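To see where the 10⁻²⁰⁰ comes from: under the “99% effective” model, each mask independently fails with probability 0.01, so 100 failures in a row has probability 0.01¹⁰⁰. A quick sketch (the numbers here are just the ones from the example above):

```python
from math import log10

# Model: masks are 99% effective, so each mask independently
# fails with probability 0.01.
p_failure = 0.01
n_patients = 100

# Likelihood the model assigns to all 100 masks failing:
likelihood = p_failure ** n_patients

print(likelihood)         # ≈ 1e-200
print(log10(likelihood))  # ≈ -200
```

Tiny, but still strictly positive—which is exactly why the theory survives the observation, unlike a model that assigned the outcome probability zero.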
And before anyone brings up 0 And 1 Are Not Probabilities, remember that in the real world:
There is a probability that the photon did fire and our instruments missed it.
There is a probability that we unknowingly failed to set up or confirm the conditions that our theory required in order for the photon to fire.
We do not assign 100% probability to our theory being correct in the first place, so we can throw the theory out without Laplace throwing us to hell for an infinitely negative score.
This means that the falsifying evidence, on its own, does not destroy the theory. But it can still weaken it severely. And my point (which I’ve detoured too far from) is that the perfect Bayesian should arrive at the same final posterior no matter the order or stage at which they update on the evidence.
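That order-independence follows from Bayes’ theorem directly: updating on each observation in turn multiplies the same likelihoods together as updating on all of them at once. A minimal sketch, with made-up numbers (two hypotheses, three observations):

```python
def bayes_update(prior, likelihoods):
    """One Bayesian update: posterior ∝ prior × likelihood, renormalized."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

prior = [0.5, 0.5]  # two hypotheses, equal priors

# Likelihood of each observation under each hypothesis (invented numbers)
observations = [[0.9, 0.2], [0.8, 0.3], [0.1, 0.7]]

# Sequential: update on the observations one at a time
posterior_seq = prior
for likelihoods in observations:
    posterior_seq = bayes_update(posterior_seq, likelihoods)

# Batch: multiply the likelihoods first, then update once
joint = [1.0, 1.0]
for likelihoods in observations:
    joint = [j * l for j, l in zip(joint, likelihoods)]
posterior_batch = bayes_update(prior, joint)

print(posterior_seq)
print(posterior_batch)  # same posterior, up to float rounding
```

Because multiplication is commutative and associative, any stage at which you fold in a given piece of evidence produces the same final posterior.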