Here is a different answer to your question, hopefully a better one.
It is no coincidence that the prior that requires log(x)+1 bits of evidence for length x does not converge. The reason is that you cannot specify, in only log(x)+1 bits, that a string has length x. Standard methods of specifying string length each have drawbacks, and each corresponds in a natural way to a different prior distribution. (I will assume 32-bit words and measure length in words, but you can measure length in bits if you like.)
Suppose you have a length-prefixed string. Then you pay 32 bits to encode the length, but the length can be at most 2^32-1. This corresponds to the uniform distribution that assigns equal probability to every length between 0 and 2^32-1. (We derive this distribution by supposing that every bit doing length-encoding duty is independently random and equally likely to be 0 or 1.)
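As a concrete illustration, here is a minimal Python sketch of the fixed-width prefix scheme and of sampling its natural length distribution; the function names are my own, not anything standard.

```python
import random
import struct

def encode_length_prefixed(payload_words):
    """Prefix the message with a single 32-bit word giving its length in words."""
    assert len(payload_words) < 2**32
    prefix = struct.pack("<I", len(payload_words))
    return prefix + b"".join(struct.pack("<I", w) for w in payload_words)

def sample_natural_length():
    """Natural prior on lengths: treat the 32 prefix bits as independent fair
    coin flips, which makes the length uniform on {0, ..., 2**32 - 1}."""
    return random.getrandbits(32)
```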
Suppose you have a null-terminated string. Then you are paying a hidden linear cost: the 0 word is reserved for the terminator, so you have only 2^32-1 words to use in your message, which means you only convey log(2^32-1) bits of information per 32 bits of message. The natural distribution here is one in which every bit conveys maximal information, so each word has a 1 in 2^32 chance of being the terminator, and so the length of your string is Geometric with parameter 1/2^32.
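A sketch of the same idea for null-terminated strings, under my assumption that a "word" is just a Python int below 2^32; since sampling word by word would take about 2^32 steps on average, the sampler inverts the geometric CDF instead.

```python
import math
import random

def encode_null_terminated(payload_words):
    """Reserve the 0 word as terminator; payload words must therefore be nonzero."""
    assert all(0 < w < 2**32 for w in payload_words)
    return payload_words + [0]

def sample_natural_length(p=1.0 / 2**32):
    """Natural prior on lengths: each 32-bit word of fair coin flips terminates
    the string with probability 1/2**32, so the number of payload words is
    Geometric(p). Sampled via the inverse CDF because the mean (~2**32 words)
    makes word-by-word simulation impractical."""
    u = random.random()
    return int(math.log(1.0 - u) / math.log(1.0 - p))
```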
A common scheme for big-integer types is to have a flag bit in every word that is 1 if another word follows, and 0 otherwise. This is very similar to the null-terminator scheme, and in fact the natural distribution here is also Geometric, but with parameter 1/2, because each flag bit, if chosen randomly, has probability 1/2 of being 0.
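A sketch of this flag-bit (varint-style) scheme, with 31 payload bits per 32-bit word; again the names are just for illustration.

```python
import random

def encode_flag_bit(chunks):
    """Pack 31 payload bits per 32-bit word; the top bit is 1 if another word
    follows and 0 on the final word."""
    assert chunks and all(c < 2**31 for c in chunks)
    return [((1 << 31) if i + 1 < len(chunks) else 0) | c
            for i, c in enumerate(chunks)]

def sample_natural_word_count():
    """Natural prior: each flag bit is a fair coin flip, so the word count is
    Geometric(1/2) -- count words until the first 0 flag."""
    n = 1
    while random.getrandbits(1):
        n += 1
    return n
```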
If you are encoding truly enormous strings, you could use a length-prefixed string in which the length is itself a big integer in the flag-bit encoding above. This is much more efficient, and the natural distribution here is also much more heavy-tailed: it is something like a smoothed-out version of 2^(31·Geometric(1/2)). We have come closer to encoding a length of x in log(x) bits, but it's more like C·log(x) for some constant C. (The constant is actually 32/31, since for every 31 bits of length, we have 1 bit of "length of length".)
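Putting the two schemes together, here is a sketch of sampling a string length under the big-integer length prefix: the number of length words is Geometric(1/2), and each contributes 31 uniformly random bits of length.

```python
import random

def sample_natural_length():
    """Number of length words is Geometric(1/2) (one fair flag bit per word);
    each length word then contributes 31 uniformly random payload bits, giving
    a smoothed-out version of 2**(31 * Geometric(1/2))."""
    k = 1
    while random.getrandbits(1):   # flag bit 1 means another length word follows
        k += 1
    return random.getrandbits(31 * k)
```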
If we iterate this construction, we can produce even more efficient schemes, but log(x)+1 bits remains unreachable.
(What I have been calling the natural distribution for each of these schemes is something like a max-entropy distribution. The way I am defining it is to assume an infinite sequence of independent fair coin flips, let the scheme read from this sequence until it decides it has reached the end of the string, and take that string's length.)
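Here is one way to make that definition concrete, using the flag-bit scheme as the example decoder; the generator-based interface is my own framing, not part of any of the schemes above.

```python
import random

def random_bits():
    """An endless stream of fair coin flips, standing in for the infinite
    sequence of random bits."""
    while True:
        yield random.getrandbits(1)

def flag_bit_length(bits):
    """Read 32-bit words from the bit stream until one has a 0 flag bit, and
    return the number of words read -- the length of the decoded string."""
    length = 0
    while True:
        word = [next(bits) for _ in range(32)]
        length += 1
        if word[0] == 0:   # a 0 flag bit marks the final word
            return length

# flag_bit_length(random_bits()) draws from Geometric(1/2), matching the
# natural distribution described above for the flag-bit scheme.
```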