Decimal digit computations as a testing ground for reasoning about probabilities
Points in this article emerged from a conversation with Anna Salamon.
I think that thinking about decimal expansions of real numbers provides a good testing ground for one’s intuition about probabilities. The context of computation is very different from most of the contexts that humans deal with; in particular it’s much cleaner. As such, this testing ground should not be used in isolation; the understanding that one reaps from it needs to be integrated with knowledge from other contexts. Despite its limitations I think that it has something to add.
Given a computable real number x, a priori the probability that any string of n decimal digits comprises the first n decimal digits of x is 10^(-n). For concreteness, we’ll take x to be pi. It has long been conjectured that pi is a normal number. This is consistent with the notion that the digits of pi are “random” in some sense, and in this respect pi contrasts with (say) rational numbers and Liouville’s constant.
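To make the setup concrete, here is a minimal sketch in Python (the use of the mpmath library is my own choice for illustration, not something essential to the point): under a uniform prior over digit strings, a randomly chosen string of n digits matches the first n digits of pi with probability exactly 10^(-n).

```python
# A minimal sketch of the a priori setup: under a uniform prior over digit
# strings, a random n-digit string matches the first n digits of pi with
# probability 10**(-n).  (mpmath is used only as a convenient source of digits.)
import random
from mpmath import mp

def pi_digits(n):
    """Return the first n digits of pi's decimal expansion ('3', '1', '4', ...) by truncation."""
    mp.dps = n + 10                                  # extra working precision as a guard
    return str(int(mp.floor(mp.pi * mp.mpf(10) ** (n - 1))))

def random_digit_string(n):
    """A digit string drawn uniformly at random: the 'a priori' guess."""
    return "".join(random.choice("0123456789") for _ in range(n))

n = 5
print(pi_digits(n))                                  # '31415'
print(10.0 ** -n)                                    # a priori probability of guessing it
print(random_digit_string(n) == pi_digits(n))        # True with probability 10**(-n)
```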
According to the Northwestern University homepage, pi has been computed to five trillion digits. So, to the extent that one trusts the result of the computation, there exists an example of a statement which had an a priori probability of 10^(-n), with n > 5•10^12, of being true which we now know to be true with high confidence. How much should we trust the computation? Well, I don’t know whether it’s been verified independently, and there are a variety of relevant issues about which I know almost nothing (coding issues; hardware issues; the degree of rigor with which the algorithm used has been proven to be correct, etc.). One would have more confidence if one knew that several independent teams had succeeded in verifying the result using different algorithms & hardware. One would have still more confidence if one were personally involved in such a team and became convinced of the solidity of the methods used. Regardless:
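As a small-scale illustration of what verification by "different algorithms" might look like, here is a hedged sketch (the particular algorithms and the mpmath library are my own choices, not anything used in the record computations): it computes pi two independent ways, via mpmath's built-in constant and via Machin's formula, pi/4 = 4 arctan(1/5) - arctan(1/239), in plain integer arithmetic, and checks that the digits agree.

```python
# A small-scale analogue of independent verification: compute pi two different
# ways and check that the digits agree.  The choice of algorithms here is my
# own illustration, not that of the record computations.
from mpmath import mp

def pi_digits_mpmath(n):
    """First n digits of pi via mpmath's built-in constant (truncated, not rounded)."""
    mp.dps = n + 10
    return str(int(mp.floor(mp.pi * mp.mpf(10) ** (n - 1))))

def arctan_inv(x, one):
    """arctan(1/x) scaled by `one`, via the alternating Taylor series in integer arithmetic."""
    power = one // x          # current term, scaled by `one`
    total = power
    x2 = x * x
    divisor = 3
    sign = -1
    while power:
        power //= x2
        total += sign * (power // divisor)
        divisor += 2
        sign = -sign
    return total

def pi_digits_machin(n):
    """First n digits of pi via Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239)."""
    guard = 10                # extra digits to absorb truncation error in the series
    one = 10 ** (n + guard)
    pi_scaled = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return str(pi_scaled)[:n]

n = 1000
assert pi_digits_mpmath(n) == pi_digits_machin(n)    # two independent methods agree
print("first", n, "digits agree")
```

Agreement of the two methods does not rule out a shared mistake (for instance, both computations running on the same faulty hardware), but it illustrates the kind of cross-check that raises confidence well above the a priori estimate.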
(a) As early as 1962, mathematicians had computed pi to 10^5 digits. Presumably, since then their computation has been checked many times over by a diversity of people and methods. Trusting a single source is still problematic, as there may have been a typo or whatever, but it seems uncontroversial to think that if one uses the nearest apparently reliable computational package (say, Wolfram Alpha), then the chance that the output is correct is > 10%. Thus we see how an initial probability estimate of 10^(-100,000) can rise to a probability over 10^(-1) in practice.
(b) If one were determined, one could probably develop ~90% confidence in the accuracy of the first billion digits of pi. I say this because computational power and algorithms have permitted such a vast computation for over 20 years; presumably, by studying, testing and tweaking all of the programs written since then, one could do many checks on the accuracy of each of the first billion digits (a sketch of one kind of spot-check appears below). Assuming that this is possible, an initial probability estimate of 10^(-1,000,000,000) can in practice rise above 0.9.
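To give a flavor of how one might spot-check a very long computation without redoing all of it, here is a sketch using the Bailey-Borwein-Plouffe (BBP) digit-extraction formula, which computes hexadecimal digits of pi at an arbitrary position without computing the digits before it. The code, and the use of mpmath as a stand-in for a "full" computation, are my own illustration rather than a description of how the checks in (a) and (b) would actually be carried out.

```python
# Spot-checking a long computation of pi at scattered positions: the BBP
# formula reaches an arbitrary hex position directly, so it gives evidence
# about a full computation that produced all the digits in order.
from mpmath import mp

def pi_hex_digits_full(n):
    """First n hex digits of pi after the point, from a full high-precision computation."""
    mp.dps = int(n * 1.3) + 20            # ~1.2 decimal digits per hex digit, plus guard
    frac = mp.pi - 3
    out = []
    for _ in range(n):
        frac *= 16
        d = int(frac)
        out.append("0123456789abcdef"[d])
        frac -= d
    return "".join(out)

def bbp_hex_digits(pos, count=6):
    """`count` hex digits of pi starting at 0-indexed hex position `pos` after the point,
    via the BBP digit-extraction formula (float precision limits `count` to about 6-8)."""
    def partial_sum(j):
        # fractional part of 16**pos * sum_k 1/(16**k * (8k + j))
        s = 0.0
        for k in range(pos + 1):
            s = (s + pow(16, pos - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = pos + 1
        while True:
            term = 16.0 ** (pos - k) / (8 * k + j)
            if term < 1e-17:
                break
            s += term
            k += 1
        return s
    frac = (4 * partial_sum(1) - 2 * partial_sum(4)
            - partial_sum(5) - partial_sum(6)) % 1.0
    out = []
    for _ in range(count):
        frac *= 16
        d = int(frac)
        out.append("0123456789abcdef"[d])
        frac -= d
    return "".join(out)

# Spot-check the "full" computation at a few scattered positions.
full = pi_hex_digits_full(2000)
for pos in (0, 500, 1234, 1990):
    assert bbp_hex_digits(pos) == full[pos:pos + 6]
print("spot checks passed")
```

The point of the sketch is only the structure of the check: a formula that reaches arbitrary positions directly provides evidence about a computation that had to produce all of the digits in order, and enough such checks at scattered positions can push one's confidence in the result far above the a priori estimate.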
This shows that probabilities which are apparently very small can rapidly shift to being quite large with the influx of new information. There’s more that I could say about this, but I think that what I’ve written so far is enough to warrant posting, and the rest of my thoughts are sufficiently ill-formed that I shouldn’t try to say more right now. I welcome thoughts and comments.