Hi, Anna. I definitely agree with you that two equally-good theories could agree on the results of experiments 1--20 and then disagree about the results of experiment 21. But I don’t think that they could both be best-possible theories, at least not if you fix a “good” criterion for evaluating theories with respect to given data.
What I had in mind when I made that claim in my original comment was the following:
Suppose that theory T1 says “result 21 will be X” and theory T2 says “result 21 will be Y”.
Then I claim that there is another theory T3, which correctly predicts results 1--20, and which also predicts “result 21 will be Z”, where Z is a less-precise description that is satisfied by both X and Y. (In the simplest case, Z is just the disjunction “X or Y”, though it could be coarser still: maybe T1 says “the ball will be red”, T2 says “the ball will be blue”, and T3 says “the ball will be visible”.)
So T3 has the same track record of successful predictions as T1 and T2, but it requires less information to specify (in the Kolmogorov-complexity sense), because it commits to a less precise prediction about result 21.
I think that’s right, anyway. There’s definitely still some hand-waving here. I haven’t proved that a theory’s being vaguer about result 21 implies that it requires less information to specify. I think it should be true, but I lack the formal information theory to prove it.
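To gesture at why I expect it to be true, here’s a toy sketch in Python. It uses Shannon code lengths rather than Kolmogorov complexity proper: under a prefix code matched to a probability distribution over outcomes, the ideal codeword length for an event is -log2 of its probability, and a vaguer prediction names a higher-probability event, so its codeword is never longer. The distribution and outcome names are made up, and encoding a theory’s *prediction* isn’t the same as specifying the theory itself, so treat this as an intuition pump rather than the missing proof:

```python
from math import log2

# Made-up probability distribution over the possible results of experiment 21.
p = {"red": 0.2, "blue": 0.3, "green": 0.1, "invisible": 0.4}

def code_length(event):
    # Ideal prefix-code length, in bits, for predicting that the outcome
    # falls in `event` (a set of outcomes): -log2 of the event's probability.
    return -log2(sum(p[o] for o in event))

print(code_length({"red"}))                   # T1's precise prediction: ~2.32 bits
print(code_length({"blue"}))                  # T2's precise prediction: ~1.74 bits
print(code_length({"red", "blue", "green"}))  # T3's vaguer "visible": ~0.74 bits
```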
But suppose that this can be formalized. Then there is a theory T3 that requires less information to specify than either T1 or T2, and that has performed as well as both on all observations so far. A “good” criterion should judge T3 the better theory in that case, so T1 and T2 weren’t best-possible. (A toy version of such a criterion is sketched below.)
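For concreteness, here’s the kind of criterion I imagine, in the minimum-description-length spirit: among the theories consistent with every observation so far, prefer the one with the smallest description length. The `Theory` class and the bit counts below are invented placeholders, not real Kolmogorov complexities; this is just a minimal sketch of how such a criterion would rank T1, T2, and T3:

```python
from dataclasses import dataclass

@dataclass
class Theory:
    name: str
    bits: float                   # hypothetical description length, in bits
    predictions: list[set[str]]   # per-experiment sets of outcomes the theory allows

    def consistent_with(self, observed):
        # A theory survives iff every observed result falls inside its prediction.
        return all(o in pred for o, pred in zip(observed, self.predictions))

def best_theory(theories, observed):
    # Among theories consistent with all observations so far,
    # prefer the one that is cheapest to specify.
    survivors = [t for t in theories if t.consistent_with(observed)]
    return min(survivors, key=lambda t: t.bits)

# Experiments 1--20 are compressed here into a single observed result.
# All three theories survive it, and the vaguer-but-shorter T3 wins.
t1 = Theory("T1", bits=110.0, predictions=[{"up"}, {"red"}])
t2 = Theory("T2", bits=112.0, predictions=[{"up"}, {"blue"}])
t3 = Theory("T3", bits=100.0, predictions=[{"up"}, {"red", "blue", "green"}])
print(best_theory([t1, t2, t3], observed=["up"]).name)  # -> "T3"
```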