Well, computable normal numbers exist. If you replace pi with such a number, then we know that strings of zeroes aren’t overrepresented in the sense of asymptotic frequency. They might be overrepresented in some other sense, though. Can you define that precisely?
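For concreteness, by “not overrepresented in the sense of asymptotic frequency” I just mean the standard definition of normality in base 10: writing x_1 x_2 x_3 … for the digit expansion, every block B of m digits satisfies

\[
\lim_{n\to\infty} \frac{\#\{\,1 \le i \le n : x_i x_{i+1} \cdots x_{i+m-1} = B\,\}}{n} \;=\; 10^{-m}.
\]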
I don’t see why the prior should be uniform. None of the proposed priors are. Maybe I misunderstand your question?
Asymptotic frequency doesn’t explain this effect away, because the allowed complexity of subsequences of pi grows only logarithmically in their position, and so for any m there is an n past which complexity places no restriction on which length-m blocks can appear. That is, it’s totally possible for a number to be asymptotically normal but still favor simpler sequences. Not sure how to formalize this or how to formalize the alternatives.
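Roughly, the bound I have in mind (a sketch only; constants depend on the universal machine, and I’m using prefix complexity): since pi is computable, the block of m digits starting at position n is determined by n and m, so

\[
K(\pi_n \pi_{n+1} \cdots \pi_{n+m-1}) \;\le\; K(n) + K(m) + O(1) \;=\; O(\log n + \log m),
\]

and once that bound exceeds m \log_2 10 + O(1), the trivial maximum for an m-digit block, it stops being a restriction at all.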
Hmm, it might be provable that some simple length-m sequence shows up with frequency greater than 10^-m in Chaitin’s constant, in some range that depends predictably on its complexity.
Basically, thought experiment 1: suppose you have N mutually exclusive and exhaustive sentences, you know their complexities K(1), …, K(N), and you don’t know anything else. A uniform prior here just means assigning each of these sentences logical probability 1/N.
Next suppose you have a countable set of mutually exclusive and exhaustive sentences, you know their complexities K(n) (alternatively, you could have some privileged ordering of the sentences), and you don’t know anything else. There’s no uniform prior now: any positive constant assignment sums to infinity, so the prior would have to be everywhere 0, and that’s not even a little normalizable.
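To put numbers on the contrast (a sketch; the complexity-weighted prior at the end is just one example of a proposed non-uniform prior, not necessarily the one under discussion): in the finite case P(A_n) = 1/N is fine, while in the countable case no constant assignment works,

\[
P(A_n) = c \ \text{for all } n \;\Longrightarrow\; \sum_{n=1}^{\infty} P(A_n) = \begin{cases} 0, & c = 0,\\ \infty, & c > 0,\end{cases}
\]

whereas something like P(A_n) \propto 2^{-K(n)} does normalize, since \sum_n 2^{-K(n)} \le 1 by the Kraft inequality for prefix complexity.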
See my post below for one way of framing it more precisely.