People eagerly jump the gun and seize on any available reason to reject a disliked theory. That is why I gave the example of 19th-century evolutionism, to show why one should not be too quick to reject a “non-technical” theory out of hand. By the moral customs of science, 19th-century evolutionism was guilty of more than one sin. 19th-century evolutionism made no quantitative predictions. It was not readily subject to falsification. It was largely an explanation of what had already been seen. It lacked an underlying mechanism, as no one then knew about DNA. It even contradicted the 19th-century laws of physics. Yet natural selection was such an amazingly good post-facto explanation that people flocked to it, and they turned out to be right. Science, as a human endeavor, requires advance prediction. Probability theory, as math, does not distinguish between post-facto and advance prediction, because probability theory assumes that probability distributions are fixed properties of a hypothesis.
The rule about advance prediction is a rule of the social process of science—a moral custom and not a theorem. The moral custom exists to prevent human beings from making human mistakes that are hard to even describe in the language of probability theory, like tinkering after the fact with what you claim your hypothesis predicts. People concluded that 19th-century evolutionism was an excellent explanation, even if it was post-facto. That reasoning was correct as probability theory, which is why it worked despite all scientific sins. Probability theory is math. The social process of science is a set of legal conventions to keep people from cheating on the math.
and:
But the rule of advance prediction is a morality of science, not a law of probability theory. If you have already seen the data you must explain, then Science may darn you to heck, but your predicament doesn’t collapse the laws of probability theory. What does happen is that it becomes much more difficult for a hapless human to obey the laws of probability theory. When you’re deciding how to rate a hypothesis according to the Bayesian scoring rule, you need to figure out how much probability mass that hypothesis assigns to the observed outcome. If we must make our predictions in advance, then it’s easier to notice when someone is trying to claim every possible outcome as an advance prediction, using too much probability mass, being deliberately vague to avoid falsification, and so on.
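The scoring idea in the passage above can be made concrete. This is a minimal sketch of the logarithmic scoring rule, with hypothetical numbers: a hypothesis is scored by the log of the probability it assigned to the outcome actually observed, and the arithmetic is identical whether that probability was written down before or after the observation. What changes with postdiction is only how easy it is for a human to cheat about what the hypothesis "really" assigned.

```python
import math

def log_score(predicted_probs, observed_outcome):
    """Log of the probability mass the hypothesis put on what actually happened."""
    return math.log(predicted_probs[observed_outcome])

# A sharp hypothesis concentrates its probability mass; a vague one
# spreads mass over every outcome so that nothing can "falsify" it.
sharp = {"A": 0.90, "B": 0.05, "C": 0.05}
vague = {"A": 0.34, "B": 0.33, "C": 0.33}

outcome = "A"
print(log_score(sharp, outcome))  # ≈ -0.105
print(log_score(vague, outcome))  # ≈ -1.079
```

Because the distributions must sum to one, the vague hypothesis pays for its unfalsifiability: mass claimed for every outcome is mass taken away from the outcome that occurred.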
Probability theory has no separate category for ‘prediction’ and ‘postdiction’.
But yes, you do need to make predictions and postdictions that actually follow from the math of your theory. Still, semitechnical theories can build up a high enough score to beat out rival theories.
Good. The reason is explained by Yudkowsky above.
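The point about semitechnical theories can be sketched with hypothetical numbers: under the log scoring rule, scores add across observations, so a theory that assigns only modest probabilities but is consistently right can accumulate a higher total than a rival that is occasionally sharp but often wrong.

```python
import math

# Hypothetical per-observation probabilities that each theory assigned
# to the outcome actually observed, over six observations.
p_semitechnical = [0.7, 0.7, 0.6, 0.7, 0.6, 0.7]  # modest but steady
p_rival         = [0.9, 0.2, 0.9, 0.2, 0.3, 0.2]  # sharp but erratic

# Log scores add across independent observations.
score_semi = sum(math.log(p) for p in p_semitechnical)
score_rival = sum(math.log(p) for p in p_rival)

print(score_semi > score_rival)  # True
```

No single observation is decisive here; the semitechnical theory wins on the cumulative score, which is exactly the sense in which 19th-century evolutionism could outscore its rivals without quantitative predictions.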