I’ve seen too many cases of overfitting data to trust the second theory. Trust the validated one more.
The question would be more interesting if we said that the original theory accounted for only some of the new data.
If you know a lot about the space of possible theories and “possible” experimental outcomes, you could try to compute which theory to trust, using (surprise) Bayes’ law. If it were the case that the first theory applied to only 9 of the 10 new cases, you might find parameters such that you should trust the new theory more.
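A minimal sketch of that posterior-odds computation, with entirely made-up numbers: the priors, the per-case match probability, and the 9-of-10 vs 10-of-10 split are all illustrative assumptions, not anything the original states.

```python
# Hedged sketch: Bayes' law as posterior odds for two competing theories.
# Every number below is an assumption chosen for illustration.

def posterior_odds(prior_1, prior_2, like_1, like_2):
    """Posterior odds of theory 1 over theory 2 given the data (Bayes' law)."""
    return (prior_1 * like_1) / (prior_2 * like_2)

p_match = 0.9                 # assumed chance a correct theory matches one case
prior_1, prior_2 = 0.8, 0.2   # complexity-based prior favoring the validated theory

like_1 = p_match**9 * (1 - p_match)  # validated theory: 9 hits, 1 miss
like_2 = p_match**10                 # new theory: fits all 10 cases

odds = posterior_odds(prior_1, prior_2, like_1, like_2)
print(f"odds in favor of the validated theory: {odds:.2f}")
```

With these particular assumed numbers the odds come out below 1 (about 0.44), i.e. the new theory wins despite its lower prior; push the complexity prior further toward the validated theory and the conclusion flips, which is exactly the parameter-dependence described above.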
In the given case, I don’t think there is any way to deduce that you should trust the second theory more, unless you have some a priori measure of a theory’s plausibility, such as its complexity.