The empirical observation that deep learning models fail to approximate the Prime Counting Function
I can’t find any empirical work on this…
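In the absence of one, here is a minimal sketch of what such an experiment could look like. Everything below (the architecture, the input scaling, the train/test ranges) is an illustrative assumption of mine, not taken from any published study:

```python
# Minimal sketch: can a small MLP approximate the prime counting
# function pi(n)?  All choices here are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

def prime_counts(limit):
    """pi(n) for n = 0..limit-1 via a sieve of Eratosthenes."""
    is_prime = np.ones(limit, dtype=bool)
    is_prime[:2] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = False
    return np.cumsum(is_prime)

N_TRAIN, N_TEST = 10_000, 20_000
pi = prime_counts(N_TEST)

# Train on n in [2, 10000), test extrapolation on [10000, 20000).
n = np.arange(2, N_TEST)
X, y = (n / N_TEST).reshape(-1, 1), pi[n]  # scale inputs to [0, 1]
split = N_TRAIN - 2

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
model.fit(X[:split], y[:split])

in_err = np.abs(model.predict(X[:split]) - y[:split]).mean()
out_err = np.abs(model.predict(X[split:]) - y[split:]).mean()
print(f"mean abs error, interpolation:  {in_err:.1f}")
print(f"mean abs error, extrapolation: {out_err:.1f}")
```

Note that interpolating the smooth count pi(n) is easy almost by construction; the interesting question is what happens out of range, which is what the extrapolation error probes.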
Whatever Bernhard Riemann knew about the Prime Counting Function, he would have had to obtain it by means other than data compression.
He obtained his “explicit formulas” by reasoning about an ideal object (his zeta function) which, by construction, contains information about all prime numbers.
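For concreteness, the explicit formula is usually stated today in its Chebyshev form (this is von Mangoldt’s later rigorous version; Riemann’s own paper phrased it in terms of his function J(x)):

```latex
% Explicit formula for the Chebyshev function
% psi(x) = sum over prime powers p^k <= x of log p,
% where rho runs over the nontrivial zeros of the zeta function
% (valid for x > 1 that is not a prime power):
\[
  \psi(x) = x - \sum_{\rho} \frac{x^{\rho}}{\rho}
            - \log 2\pi - \frac{1}{2}\log\!\left(1 - x^{-2}\right)
\]
```

Every nontrivial zero contributes an oscillating term x^ρ/ρ, which is the precise sense in which the zeta function “contains information about all prime numbers.”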
Riemann’s analysis back then was far from trivial, and there were important gaps in his derivation of the explicit formulas for Prime Counting. What appears obvious now was far from obvious then.
I just appended a summary of Yang-Hui He’s experiments on the Prime Recognition problem.
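For readers who want to try something in this spirit, a minimal prime-recognition setup might look as follows; the binary-digit encoding and the classifier below are my own illustrative choices, not necessarily He’s exact protocol:

```python
# Minimal sketch of a prime-recognition experiment: classify an
# integer as prime or composite from its binary digits.  The
# encoding, model, and ranges are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

LIMIT, BITS = 2 ** 14, 14

def prime_mask(limit):
    """Boolean primality mask for 0..limit-1 via a sieve."""
    is_prime = np.ones(limit, dtype=bool)
    is_prime[:2] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = False
    return is_prime

n = np.arange(2, LIMIT)
# Feature vector: the binary digits of n.
X = (n[:, None] >> np.arange(BITS)) & 1
y = prime_mask(LIMIT)[n].astype(int)

# Random train/test split; the question is whether held-out
# primality is predicted better than the base rate.
rng = np.random.default_rng(0)
idx = rng.permutation(len(n))
cut = int(0.8 * len(n))
tr, te = idx[:cut], idx[cut:]

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
clf.fit(X[tr], y[tr])
print(f"test accuracy:             {clf.score(X[te], y[te]):.3f}")
print(f"base rate (all composite): {1 - y[te].mean():.3f}")
```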
Either way, I believe that additional experiments may be enlightening, since the applied mathematics that mathematicians do is only true to the extent that it has verifiable consequences.
This might interest you: a language model is used to develop a model of inflation (expansion in the early universe), using a Kolmogorov-like principle (minimum description length).
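As a toy illustration of the minimum-description-length idea (a schematic of the principle only, not the method used in that work): among candidate models, prefer the one that minimizes the bits needed to describe the model plus the bits needed to encode the data given the model.

```python
# Toy model selection by minimum description length (MDL):
# score each polynomial fit as bits for its coefficients plus
# bits for its residuals under a Gaussian noise model, and
# prefer the lowest total.  All constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
y = 1.0 - 2.0 * x ** 2 + 0.05 * rng.normal(size=x.size)  # true model: degree 2

BITS_PER_PARAM = 32  # crude fixed cost per real-valued coefficient

for degree in range(7):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma = resid.std() + 1e-12
    # Gaussian negative log-likelihood of the residuals, in bits.
    nll_bits = 0.5 * np.sum((resid / sigma) ** 2
                            + np.log(2 * np.pi * sigma ** 2)) / np.log(2)
    mdl = BITS_PER_PARAM * (degree + 1) + nll_bits
    print(f"degree {degree}: description length ~ {mdl:8.1f} bits")
```

The degree-2 fit should win: higher degrees barely shorten the residual code while paying a fixed cost per extra coefficient.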
Thank you for bringing up these points.
Thank you for sharing this. 👌