I like this post for the way it illustrates how the probability distribution over blocks of strings changes as you increase block length.
Otherwise, I think the representation of other ideas and how they relate to it is not very accurate, and might mislead readers about the consensus among academics.
As an example, strings in which the frequency of substrings converges to the uniform distribution are called “normal”. Whether this could serve as the definition of a random string was a big debate through the first half of the 20th century, as people tried to put probability theory on solid foundations. But a fixed, deterministic program can generate a normal string! So this idea was generally rejected as the definition of randomness. Algorithmic information theory instead uses Martin-Löf randomness, under which an (infinite) string is random if it can’t be compressed by any program (with a bunch of subtleties and distinctions in there).
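As a concrete illustration (a minimal Python sketch, not from the post): Champernowne’s constant 0.123456789101112…, whose digits are just the positive integers written out in order, was proven normal in base 10 by Champernowne in 1933, yet a few lines of deterministic code produce it:

```python
def champernowne_digits(n):
    """First n decimal digits of Champernowne's constant 0.123456789101112...

    The sequence is provably normal in base 10, yet it is generated by this
    fixed, deterministic program -- so normality alone can't define randomness.
    """
    digits = []
    k = 1
    while len(digits) < n:
        digits.extend(str(k))  # append the decimal digits of the integer k
        k += 1
    return "".join(digits[:n])

print(champernowne_digits(30))  # 123456789101112131415161718192
```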
Yes, that’s right. For instance, the decimal expansion of pi is conjectured to be normal (but, iirc, this is unproven). The ideas in this post can all be found in academic texts.
Do let us know if there is any specific passage in the post that you think is misleading or false! Thanks 😊