Regardless of the probability distribution.
If one has any assignment of probabilities to an infinite series of mutually exclusive hypotheses H1, H2, …, then for every epsilon > 0 there is an N such that every hypothesis after the Nth has probability less than epsilon. In fact, there is an N such that the sum of the probabilities of all the hypotheses after the Nth is less than epsilon.
But N could be 3^^^3, which does an injury to the term "early" in my book. E.g. I could swap the probabilities p(x_3^^^3) and p(x_1).
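Both points can be checked numerically on a finite toy prior. The sketch below (a truncated geometric prior chosen purely for illustration, not anything from the discussion) finds the smallest N whose tail probability falls below epsilon, then shows that swapping one early probability with one far out in the sequence pushes N out by exactly that far:

```python
# Toy check: for a geometric prior p_i = 2**-i the tail bound kicks in
# quickly, but permuting finitely many probabilities can push N out
# arbitrarily far. (Finite truncation of the infinite series; the prior
# and the swap position are illustrative, not from the original thread.)

def smallest_N(probs, epsilon):
    """Smallest N such that the total probability of all hypotheses
    after the Nth is below epsilon."""
    tail = sum(probs)
    for n, p in enumerate(probs):
        if tail < epsilon:
            return n
        tail -= p
    return len(probs)

probs = [2.0**-i for i in range(1, 101)]   # truncated geometric prior
print(smallest_N(probs, 0.01))             # -> 7

# Swap p(x_1) with p(x_50): most of the mass now sits at index 50,
# so N jumps from 7 to 50.
swapped = probs[:]
swapped[0], swapped[49] = swapped[49], swapped[0]
print(smallest_N(swapped, 0.01))           # -> 50
```

The theorem only guarantees that some finite N exists; as the swap shows, nothing constrains how large it is.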
Indeed you could, but that problem is already present in the definition of Kolmogorov complexity. It’s only defined up to an arbitrary additive constant determined by (in one formulation) the choice of a universal Turing machine. The Kolmogorov complexity of a string is the size of the shortest input for that UTM that produces that string as output, but there’s nothing in the definition to prevent the UTM from having any finite set of arbitrary strings on speed-dial.
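The "speed-dial" point can be sketched with a toy description language (emphatically not a real universal Turing machine; the strings and byte codes here are made up). A machine may assign a one-byte code to any finite set of strings chosen in advance, giving those strings tiny complexity under that machine, while every other string's complexity changes by at most a one-byte escape overhead — the additive constant:

```python
# Toy description language illustrating the "speed-dial" point.
# A machine can hard-code short names for arbitrary strings, so
# complexity is only defined up to an additive constant that
# depends on the choice of machine.

SPEED_DIAL = {0x00: b"x" * 1000}   # arbitrary long string on speed-dial

def plain_complexity(s: bytes) -> int:
    # Reference language: the shortest description is the string itself.
    return len(s)

def speed_dial_complexity(s: bytes) -> int:
    # Shortest description: either a one-byte speed-dial code, or an
    # escape byte followed by the literal string.
    if s in SPEED_DIAL.values():
        return 1
    return 1 + len(s)

print(speed_dial_complexity(b"x" * 1000))   # -> 1
print(plain_complexity(b"x" * 1000))        # -> 1000
print(speed_dial_complexity(b"hello"))      # -> 6 (one escape byte worse)
```

For strings not on speed-dial, the two measures differ by exactly one byte; for the finitely many strings that are, the difference is large but still bounded — which is all the invariance theorem promises.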
Kelly deals with this by looking at complexity from other angles. For example, a complex world can give you a long sequence of observations persuading you that it’s a simple world and then suddenly “change its mind”, but a simple world cannot pretend that it’s complex.
Why not? It would look almost exactly like the complex worlds imitating it, wouldn’t it?
Hmm… maybe I was reading your claim as stronger than you intended. I was imagining you were claiming that property would hold for any finite enumerated subset, which clearly isn’t what you meant.
If the sum of every term in a sequence after the Nth one is less than epsilon, then the sum of every term in any subsequence after the Nth one is also less than epsilon.
Right, but that isn’t what I meant—it is not necessarily the case that for every n, every hypothesis after the nth has probability less than that of the nth hypothesis. Obviously—which is why I should have noticed my confusion and not misread in the first place.
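The subsequence remark above can be sanity-checked numerically on a finite toy distribution (random weights, normalized; the seed and sizes are arbitrary). Nonnegativity is all the argument needs: a subsequence of the tail can only sum to less than the whole tail:

```python
# Numeric check: if the total probability after the Nth term is below
# eps, any subsequence drawn from those positions also sums below eps.
import random

random.seed(0)
weights = [random.random() for _ in range(1000)]
total = sum(weights)
probs = [w / total for w in weights]   # normalized toy distribution

eps = 0.05
# smallest N whose tail probability is below eps
N = next(n for n in range(len(probs)) if sum(probs[n:]) < eps)

# an arbitrary subsequence of the tail positions stays under eps too
sub = random.sample(range(N, len(probs)), 20)
assert sum(probs[i] for i in sub) <= sum(probs[N:]) < eps
```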