Asymptotic Logical Uncertainty: The Benford Test

This is the second post in a series on Asymptotic Logical Uncertainty.
All Turing machines we discuss will have no input, and will output an infinite (or finite) sequence of bits. We let $M(n)$ denote the $n$th bit output by the Turing machine $M$.
Let $A(n) = n \uparrow^n n$ be the $n$th Ackermann number, as defined in Conway and Guy’s “The Book of Numbers.”
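For concreteness, here is a minimal Python sketch of the up-arrow recursion (the function names are mine, and only the first two values are feasibly computable):

```python
def up_arrow(a, k, b):
    """Knuth's a ↑^k b (k up-arrows), defined recursively."""
    if k == 1:
        return a ** b                  # one arrow is ordinary exponentiation
    if b == 1:
        return a                       # a ↑^k 1 = a for every k
    return up_arrow(a, k - 1, up_arrow(a, k, b - 1))

def A(n):
    """The nth Ackermann number, n ↑^n n."""
    return up_arrow(n, n, n)

print(A(1), A(2))   # 1 4 -- A(3) = 3↑↑↑3 is already far too large to compute
```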
Let $\phi_1, \phi_2, \ldots$ be a simple enumeration of sentences in first-order logic over ZFC. Let $L$ be a deterministic Turing machine such that $L(n) = 1$ if and only if $\phi_n$ is provable with proof length at most $A(n)$. We will be discussing Turing machines which are designed to guess the output of $L$ in a very limited time. It will become clear later why we need $L$ to be computable, which is why we define $L$ using bounded proofs.
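To make the shape of this definition concrete, here is a purely schematic Python sketch of $L$, reusing `A` from the sketch above. The helpers `nth_sentence`, `proofs_up_to`, and `proves` are hypothetical stand-ins for a sentence enumeration and a ZFC proof verifier; the search is computable in principle but astronomically infeasible in practice:

```python
from typing import Iterator

def nth_sentence(n: int) -> str:
    """Hypothetical stand-in: the nth sentence in a fixed enumeration."""
    raise NotImplementedError

def proofs_up_to(length: int) -> Iterator[str]:
    """Hypothetical stand-in: enumerate every ZFC proof of length <= length."""
    raise NotImplementedError

def proves(proof: str, phi: str) -> bool:
    """Hypothetical stand-in: check that proof is a valid ZFC proof of phi."""
    raise NotImplementedError

def L(n: int) -> int:
    """1 iff the nth sentence has a ZFC proof of length at most A(n)."""
    phi = nth_sentence(n)
    return int(any(proves(p, phi) for p in proofs_up_to(A(n))))
```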
Given a probabilistic Turing machine $M$ and a time complexity function $T$, we define an infinite collection of dependent random variables $M_T(n)$ for $n \geq 1$. $M_T(n)$ is defined to be $M(n)$ if $M$ outputs the first $n$ bits in time at most $T(n)$. If $M$ does not output $n$ bits in time $T(n)$, then $M_T(n)$ is 0 or 1 uniformly at random. These random variables are dependent because we run $M$ only once to determine $M_T(n)$ for all $n$.
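As a sanity check on the definition, here is one way to sample the joint variables $M_T(1), \ldots, M_T(N)$ in Python. Representing a single run of $M$ as a `trace` of (step, bit) pairs is my own modeling choice, not anything from the post:

```python
import random

def sample_M_T(trace, T, N, rng=random.Random(0)):
    """One joint sample of (M_T(1), ..., M_T(N)) from a single run of M.

    trace: list of (step, bit) pairs -- the step count at which the run
    emitted each successive output bit. Reusing one run for every n is
    exactly what makes the variables M_T(n) dependent.
    """
    sample = []
    for n in range(1, N + 1):
        if len(trace) >= n and trace[n - 1][0] <= T(n):
            sample.append(trace[n - 1][1])    # first n bits arrived in time
        else:
            sample.append(rng.randint(0, 1))  # timeout: fair coin flip
    return sample

# Toy machine that emits a 1 every 10 steps: early bits miss the 2^n budget
# and get randomized, later bits come from the run itself.
trace = [(10 * (i + 1), 1) for i in range(50)]
print(sample_M_T(trace, T=lambda n: 2 ** n, N=8))
```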
Let $M$ be a probabilistic Turing machine designed to guess the output of $L$ in time at most $T(n) = 2^n$. Let $S = \{s_1, s_2, \ldots\}$ be the subset of the natural numbers picking out the sentences $\phi_{s_n} =$ “The first digit of $A(n)$ in base 10 is 1.” Let $B(n)$ be defined to be 1 if $n$ is a power of 10, and $\log_{10} 2$ otherwise. We say that $M$ satisfies the Benford test if
$$\lim_{n \to \infty} \left| P(M_T(s_n) = 1) - B(n) \right| = 0.$$
The Benford test is a desirable property because we believe that $\phi_{s_n}$ is always true when $n$ is a power of 10, and is otherwise pseudorandomly true with probability $\log_{10} 2$. (When $n = 10^k$, $A(n)$ is an exact power of 10, so its first digit is 1.) Formally, we are making the assumption that for any probabilistic Turing machine $M$, the probability that $M_T(s_i) = L(s_i)$ for all $1 \leq i \leq n$ is in $O\left(\prod_{i=1}^{n} \left| L(s_i) + B(i) - 1 \right|\right)$. Note that $\left| L(s_i) + B(i) - 1 \right|$ equals $B(i)$ when $L(s_i) = 1$ and $1 - B(i)$ when $L(s_i) = 0$, so the product is exactly the probability that a guesser which is correct with probability $B(i)$ on each bit gets all $n$ bits right.
The number $\log_{10} 2$ comes from Benford’s Law, which says that in many naturally occurring collections of numbers the leading digit is 1 with probability $\log_{10} 2 \approx 0.301$. Intuitively, we expect that when $n$ is not a power of 10, $\phi_{s_n}$ is true $\log_{10} 2$ of the time, and $A(n)$ is too large for there to be any procedure which predicts $\phi_{s_n}$ better than randomly.
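We cannot sample the leading digits of $A(n)$, but the same pseudorandom Benford behavior shows up in a family of large numbers we can compute, the powers of 2; a quick empirical check:

```python
import math

# Fraction of 2^k, 1 <= k <= 10000, whose leading digit is 1. Because
# log10(2) is irrational, k*log10(2) equidistributes mod 1, so the
# fraction converges to log10(2).
hits = sum(str(2 ** k)[0] == "1" for k in range(1, 10001))
print(hits / 10000, math.log10(2))   # ~0.3010 vs 0.30102999...
```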
The Benford test is extremely easy to pass if you are allowed to hard-code the function $B$ into your algorithm; a sketch of such a trivial guesser follows. What is more difficult is finding an algorithm which passes the test in a natural way.
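A minimal sketch of that trivial guesser, assuming the definition of $B$ above (the function names are mine):

```python
import math, random

def B(n):
    """1 if n is a power of 10, log10(2) otherwise."""
    k = round(math.log10(n))
    return 1.0 if 10 ** k == n else math.log10(2)

def hard_coded_guess(n, rng=random.Random()):
    """Answer phi_{s_n}, saying 1 with probability exactly B(n)."""
    return 1 if rng.random() < B(n) else 0

# P(guess = 1) equals B(n) by construction, so |P - B(n)| = 0 for every n
# and the limit in the Benford test holds trivially.
```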
The weak Benford test will refer to the version of the test where the algorithm is allowed to know that it will only be judged on its answers to $\phi_{s_n}$. This means that instead of trying to predict $L$, the algorithm tries to predict the output of $L_S$, which is defined to agree with $L$ on the $n$th bit when $n \in S$ and output a 0 for all other bits. The distinction between the Benford test and the weak Benford test is informal. The only difference is changing the bit string that the algorithm is “trying to predict.”
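In code, $L_S$ is just a masked version of $L$; a one-line sketch, reusing the schematic `L` from above:

```python
def L_S(n, S):
    """Agree with L on indices in S; output 0 everywhere else."""
    return L(n) if n in S else 0
```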
Next, we will present an algorithm which uses a strategy similar to Solomonoff induction to satisfy the weak Benford test.