The fact that it passes the weak Benford test is not meant to be clear from this post at all, and you are right to be skeptical given what I showed you. The complete proof is not written up in a nice way yet, because the other algorithm I will share soon is much more important, and I have been focusing on that.
The argument you present is something I am very aware of. The answer is that if there were a sequence of different non-Benfordian algorithms that did increasingly well, then the algorithm A that picks a random algorithm according to complexity and runs it would also do better than Benford (or at least not be in big O of how well Benford does), just by being able to sample the algorithms in that infinite sequence.
Making the above argument work is actually the reason I use RTM(3), not RTM(1). I need the hypothetical random algorithm A to discount later algorithms significantly less than the sampling procedure does.
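To make the shape of that argument concrete, here is a minimal sketch of what the hypothetical algorithm A could look like. The finite list of candidate algorithms, the complexity function, and the 2^(-complexity) weighting are all illustrative assumptions on my part, not the actual construction.

```python
import random

def sample_by_complexity(algorithms, complexity):
    """Pick one candidate with probability proportional to 2^(-complexity).

    `algorithms` is a finite list standing in for the (in principle infinite)
    enumeration of candidate algorithms, and `complexity` maps each one to a
    description length. Both are hypothetical stand-ins for illustration.
    """
    weights = [2.0 ** -complexity(alg) for alg in algorithms]
    return random.choices(algorithms, weights=weights, k=1)[0]

def A(algorithms, complexity, sentence):
    """The hypothetical algorithm A: sample a candidate by complexity,
    then defer to that candidate's answer on the given sentence."""
    chosen = sample_by_complexity(algorithms, complexity)
    return chosen(sentence)
```

The point of the sketch is only that A inherits, up to the complexity-based discount, whatever performance the algorithms in the sequence achieve; the exact weighting (and how much later algorithms are discounted) is where the RTM(3) vs. RTM(1) distinction matters.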
I think the fact that this fails the strong Benford test is very interesting, and I want to write more about that. I agree that I have not formally shown that it fails strong Benford; I don't even have a proof of this. All I know is that if you replace L with something that looks at a particular subset of sentences containing the Benford test sentences, then this approach fails strong Benford on that domain.
However, I do not think the proof that this satisfies weak Benford is all that important. Weak Benford really is a weak test, and passing it is not that impressive.