Equidistribution, of course, doesn’t imply “equal chances” for the last digit of the Nth prime. For the trillionth prime, the answer is simply 3. If you don’t have time to calculate that it’s 3, there’s no theorem saying you can’t conclude something (trivially, in edge cases where you have almost enough time, you might be able to exclude 1 somehow). Another interesting thing is that numbers of the form 2^n-1 are rarely prime (n has to be prime, and even then it usually isn’t), and we know that 2^n-1 never ends in 9; the impact of this decreases as n grows, though. All sorts of small, subtle things going on. (And yes, the resulting differences in “probabilities” are very small.)
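A minimal sketch of both claims about 2^n-1 (in Python, my choice here, not anything from the original discussion): the last digit of 2^n cycles through 2, 4, 8, 6, so 2^n-1 can only end in 1, 3, 7, or 5, never 9; and primality is rare even among prime exponents:

```python
def is_prime(m: int) -> bool:
    """Naive trial division; fine for the small values used here."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# Last digit of 2^n - 1 for n >= 1: the last digit of 2^n cycles
# through 2, 4, 8, 6, so 2^n - 1 ends in 1, 3, 7, or 5 -- never 9.
print(sorted({(pow(2, n, 10) - 1) % 10 for n in range(1, 1000)}))  # [1, 3, 5, 7]

# 2^n - 1 is rarely prime: n must itself be prime (otherwise 2^d - 1
# divides it for any proper divisor d of n), and even for prime n it
# is usually composite, e.g. n = 11 gives 2047 = 23 * 89.
mersenne_exponents = [n for n in range(2, 30) if is_prime(2**n - 1)]
print(mersenne_exponents)  # [2, 3, 5, 7, 13, 17, 19] -- 11, 23, 29 missing
```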
If you were able to conclude additional information about the one trillionth prime OTHER THAN that it is a large prime number of size roughly 27 trillion, then that information MAY contain information about that prime modulo 10, I agree.
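Where the “roughly 27 trillion” figure comes from: the prime number theorem estimates the nth prime as p_n ≈ n·ln(n). A quick check (a sketch; the exact value cited in the comment is the known trillionth prime):

```python
import math

n = 10**12
print(n * math.log(n))  # ~2.76e13, i.e. about 27.6 trillion

# The trillionth prime is known to be 29,996,224,275,833 (about 30
# trillion, and indeed ending in 3); the simple n*ln(n) estimate
# undershoots slightly.
```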
I would be very surprised (p = .01) if there actually were results of the form “the nth prime has such-and-such characteristics modulo whatever,” because of the strong equidistribution results that do exist; and obviously, after someone computes the answer, it is known. If you look at prime numbers and apply techniques from probability theory, it does work, and it works beautifully well. Adding information beyond “it is a large prime” would allow you to apply techniques from probability theory to deal with logical uncertainty, and it would work well. The examples you give seem to be examples of this.
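To illustrate the kind of equidistribution result being appealed to (a sketch, not a proof): by Dirichlet’s theorem, primes are asymptotically equidistributed among the residues 1, 3, 7, 9 mod 10, and this is easy to see empirically:

```python
from collections import Counter

def primes_below(limit: int) -> list[int]:
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * limit
    sieve[:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit, p)))
    return [i for i, flag in enumerate(sieve) if flag]

# Apart from 2 and 5, every prime ends in 1, 3, 7, or 9, and the four
# classes come out very close to equally represented.
counts = Counter(p % 10 for p in primes_below(1_000_000))
print(counts)  # digits 1, 3, 7, 9 each appear ~19,600 times
```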
My core point is that other problems may be less susceptible to approaches from probability theory. Even if looking at digits of prime numbers is not, strictly speaking, a probability theory problem, using techniques from probability theory there implicitly taps into theorems you may not have proven. Using techniques from probability theory elsewhere does not give you this, because the analogous theorems you haven’t proven aren’t even true.
So I am concerned, when someone purports to solve the problem of logical uncertainty, that their example problem is one whose solution looks just like ordinary uncertainty.
I don’t think we disagree on any matters of fact; I think we may disagree about the definition of the word “probability.”
Yes, I agree. In other problems, your “probabilities” are not going to be statistically independent from basic facts of mathematics. I myself posted a top-level comment about the ultimate futility of the ‘probabilistic’ approach. Probability is not just like logic. If you have a graph with loops or cycles, inference is incredibly expensive. It isn’t some reals flowing through a network of tubes; the sides of a loop are not statistically independent. It doesn’t cut down your time at all, except in extreme examples. (I once implemented a cryptographic algorithm that depends on the Miller–Rabin primality test, which is “probabilistic”; my understanding is that this is common in cryptography and is used by your browser any time you establish an SSL connection.)
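For concreteness, here is a minimal sketch of the Miller–Rabin test mentioned above (the textbook algorithm, not the commenter’s actual implementation; the 1024-bit key size at the end is an illustrative assumption):

```python
import random

def miller_rabin(n: int, rounds: int = 40) -> bool:
    """Probabilistic primality test: returns False if n is definitely
    composite, True if n is probably prime. Each round with a random
    base catches a composite with probability at least 3/4, so the
    error probability is at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # a^d mod n
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

# E.g. generating an RSA-style prime candidate, roughly as happens
# during TLS/SSL key generation (1024 bits chosen for illustration):
candidate = random.getrandbits(1024) | 1  # random odd number
while not miller_rabin(candidate):
    candidate += 2
```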
Okay, yes, we are in agreement.