There does not appear to be any a priori ordering of probability by complexity value, and unless you assume such an ordering, the conclusion is improperly drawn. If you do assume it, the argument is circular.
If the issue I’m missing is that the conclusion should read:
hypotheses with a greater complexity value tend to have a lower prior probability
Then that would explain my confusion. It does make the conclusion pretty weak, though, since it's unclear whether infinitely many possibilities even exist, and if they don't, the proof collapses.
I’d written out more reasoning, but I think this works best via counterexample. Consider:
     X1    Xm    Xn    …    z1   zn   zz
A.   .05   .04   .03   …    1    m    n
B.   .03   .04   .05   …    n    n    n
Here n > m, and for every Xa other than X1, Xm, and Xn, Xa < .03.
I may misunderstand the system you're using, but it looks to me like there is nothing preventing n and m from taking arbitrary values. In other words, a hypothesis of any arbitrary complexity could conceivably have a higher average probability than X1, principally because the notion of complexity is left completely undefined. At some point hypotheses must start declining in probability, but that does not prevent a spike at complexity level 1,000,000, and it does not prevent complexity level 5 from being more likely than complexity level 1.
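To make the spike point concrete, here is a quick sketch in Python. The weights are entirely made up for illustration, and the only constraint I'm assuming is that a prior over complexity levels has to sum to 1; as far as I can tell, nothing in the proof rules out an assignment like this:

    def weight(k):
        base = 0.5 ** k          # a decaying tail, so the total mass converges
        if k == 5:
            base += 0.5          # level 5 ends up more probable than level 1
        if k == 1_000_000:
            base += 0.1          # an arbitrary spike very deep in the ordering
        return base

    # Un-normalized mass: the sum of 0.5**k over k = 1, 2, 3, ... is 1, plus the
    # two bumps of 0.5 and 0.1, so dividing by 1.6 gives a genuine probability
    # distribution over complexity levels.
    TOTAL = 1.0 + 0.5 + 0.1

    def prior(k):
        return weight(k) / TOTAL

    assert prior(5) > prior(1)            # complexity 5 beats complexity 1
    assert prior(1_000_000) > prior(20)   # spike at complexity 1,000,000
    print(prior(1), prior(5), prior(1_000_000))

This prior is normalized, it eventually declines, and it still violates the ordering the conclusion needs.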
Assume the … portion is identical in both cases, and that each member of … is less than X1, X2, and X3. If I understand the system correctly, hypotheses of complexity 2 would then have a higher probability than those of complexity 1, contradicting the conclusion that hypotheses with a greater complexity value have a lower prior probability. And because the definition of complexity is vacuous, this could start at any particular n.
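Spelled out with numbers (the complexity labels here are my own assumption; the table above doesn't pin down which hypothesis sits at which level):

    # Case B from the table, with my assumed labels: X1 is the only complexity-1
    # hypothesis, X2 and X3 are the complexity-2 hypotheses, and everything in
    # the "..." tail sits at complexity 3 or higher with probability below .03.
    case_b     = {"X1": 0.03, "X2": 0.04, "X3": 0.05}
    complexity = {"X1": 1, "X2": 2, "X3": 2}

    def average_probability(level):
        probs = [p for name, p in case_b.items() if complexity[name] == level]
        return sum(probs) / len(probs)

    print(average_probability(1))   # 0.03
    print(average_probability(2))   # 0.045, i.e. complexity 2 beats complexity 1

Under that assumption the tail doesn't enter either average at all, so the comparison comes down to .045 versus .03.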
In something more like natural language: simply because X1 is higher than some infinite subset of the X's, that does not mean it is higher than all of the X's. Xn could be greater than X1 for any arbitrary value of n. I don't see anything that says X1 must be the greatest X, and if it is not, the conclusion appears false. There may be some level of complexity that is, on average, more probable than every other, but there is no particular reason it must be the lowest level of complexity. z1 could equal just about any natural number.
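My attempt to put that into symbols (P is the prior, with the indexing from the table above):

\[
\big(\exists\, S \subseteq \mathbb{N} \text{ infinite},\ \forall a \in S:\ P(X_1) > P(X_a)\big) \;\not\Rightarrow\; \big(\forall a:\ P(X_1) \ge P(X_a)\big)
\]

Case B is a witness to the failure: the left-hand side can hold while P(Xn) > P(X1) for some n outside S.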
I would be totally unsurprised if I missed something here, so please correct me if I have misunderstood the proof.