It doesn’t: for example, the set of strings 0, 10, 110, 1110, 11110, 111110, … is infinite and prefix-free. So is the set of C-style (null-terminated) strings.
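To make that concrete, here is a minimal Python sketch (mine, not part of the original comment; `is_prefix_free` is just an illustrative helper name) that verifies no member of a finite truncation of either family is a proper prefix of another:

```python
def is_prefix_free(strings):
    """Check that no string in the collection is a proper prefix of another."""
    strings = list(strings)
    return not any(
        s != t and t.startswith(s)
        for s in strings
        for t in strings
    )

# First family: n ones followed by a single terminating 0 -> 0, 10, 110, 1110, ...
ones_then_zero = ["1" * n + "0" for n in range(10)]

# Second family (a small finite sample): C-style strings over the alphabet {a, b},
# i.e. up to two non-null bytes followed by the null terminator 00000000.
alphabet = [format(ord(c), "08b") for c in "ab"]
c_style = (
    ["00000000"]
    + [x + "00000000" for x in alphabet]
    + [x + y + "00000000" for x in alphabet for y in alphabet]
)

print(is_prefix_free(ones_then_zero))  # True
print(is_prefix_free(c_style))         # True
```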
I see. Does the method of normalization you gave work even when there is an infinite number of hypotheses?
It does. For a prefix-free set, the sum of 2^-length over all hypotheses always converges (by the Kraft inequality it is at most 1), so dividing by that sum makes the weights add up to exactly 1.
Take gjm’s approach to explaining the normalization, in which the initial weight of 2^-length assigned to a hypothesis is the probability that you obtain that hypothesis by choosing bits at random. Then the normalization is just a conditional probability: you divide by the probability that, when choosing bits at random, you do eventually end up hitting a hypothesis.
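As a rough numerical illustration of that conditional-probability reading (the example set below is my own choice, not anything from the thread): take the prefix-free set of codewords consisting of n ones followed by “00”. The raw 2^-length weights sum to 1/2, which is exactly the probability that random bits eventually land on a codeword, and dividing by that total makes the weights sum to 1.

```python
from fractions import Fraction

# Example prefix-free set (my own, for illustration): n ones followed by "00",
# so the n-th codeword has length n + 2 and raw weight 2^-(n+2).
N = 40  # truncate the infinite sum; the neglected tail is 2^-(N+1)

raw = [Fraction(1, 2 ** (n + 2)) for n in range(N)]
total = sum(raw)              # chance that random bits eventually hit a codeword
normalized = [w / total for w in raw]

print(float(total))             # ~0.5 (exactly 1/2 - 2^-(N+1) because of the cutoff)
print(float(sum(normalized)))   # 1.0: the normalized weights sum to one
print(float(normalized[0]))     # ~0.5: the shortest codeword gets (2^-2) / (1/2)
```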
Remark: these are special cases of the schema “all strings that end with their single occurrence of pattern X, where X may occur only at positions Y”, where in the first case X is “0” and Y is “anywhere” and in the second case X is the null byte “00000000” and Y is “any position that’s a multiple of 8 bits”.
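To pin down that schema (my own sketch; `matches_schema` and its arguments are hypothetical names, not anything from the comment): a string belongs to the set for pattern X and allowed positions Y exactly when its only allowed-position occurrence of X is the one it ends with.

```python
def matches_schema(s, pattern, allowed):
    """True iff s ends with pattern at an allowed bit position and pattern
    occurs at no earlier allowed position (so the terminator is unique)."""
    end = len(s) - len(pattern)
    if end < 0 or not s.endswith(pattern) or not allowed(end):
        return False
    return not any(
        allowed(i) and s[i:i + len(pattern)] == pattern
        for i in range(end)
    )

# First case: X = "0", any position -> exactly the set 0, 10, 110, 1110, ...
print(matches_schema("1110", "0", lambda i: True))   # True
print(matches_schema("1010", "0", lambda i: True))   # False: an earlier 0

# Second case: X = the null byte, byte-aligned positions -> C-style strings.
byte_aligned = lambda i: i % 8 == 0
print(matches_schema("01100001" + "00000000", "00000000", byte_aligned))  # True ("a\0")
print(matches_schema("00000000" + "01100001", "00000000", byte_aligned))  # False
```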
(Of course there are plenty of other prefix-free sets of bitstrings. For instance: interpret blocks of 8 bits as characters as in Kindly’s second example, and say that a valid program is anything that begins with a “(”, has properly matched parentheses, and ends with the “)” matching the first “(”. This doesn’t have the convenient property that every bitstring begins with a valid program; for examples that do, take the exponential Golomb coding or the Elias omega coding of natural numbers.)
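For the exponential Golomb coding mentioned at the end, here is a minimal sketch of the order-0 version (a standard construction; the function names are mine): each codeword is k zeros, then a 1, then k more bits, so a decoder always knows where one codeword stops, which is what makes the code prefix-free.

```python
def exp_golomb_encode(n):
    """Order-0 exponential Golomb code for a non-negative integer n:
    write n + 1 in binary and prepend one 0 for every bit after the first."""
    b = bin(n + 1)[2:]
    return "0" * (len(b) - 1) + b

def exp_golomb_decode(bits):
    """Decode one codeword from the front of a bit string; return (n, rest)."""
    zeros = 0
    while bits[zeros] == "0":
        zeros += 1
    value = int(bits[zeros:2 * zeros + 1], 2)  # the 1 plus `zeros` more bits
    return value - 1, bits[2 * zeros + 1:]

# Codewords: 0 -> "1", 1 -> "010", 2 -> "011", 3 -> "00100", ...
for n in range(6):
    print(n, exp_golomb_encode(n))

# Because the code is prefix-free, a concatenation decodes unambiguously.
stream = "".join(exp_golomb_encode(n) for n in [3, 0, 5])
decoded = []
while stream:
    n, stream = exp_golomb_decode(stream)
    decoded.append(n)
print(decoded)  # [3, 0, 5]
```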