Asymptotic Logical Uncertainty: Self Reference
Part of the Asymptotic Logical Uncertainty series. Here, I will start discussing the connection between asymptotic logical uncertainty and self-reference.
For this post, I will switch back to my old model of asymptotic logical uncertainty. Let E be a probabilistic Turing machine which outputs an infinite string of bits. We want to construct a Turing machine M which, when given the code of E as input, must run quickly (e.g., it must output its nth bit by time n^2) and tries to match the output of E.
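To make the setup concrete, here is a minimal sketch of the interface in Python. The names E and M and the stub bodies are mine, not from the series: the real M would be a serious prediction algorithm, the time bound would need to be enforced, and I use inspect.getsource as a cheap stand-in for "the code of E".

```python
import inspect
import random

def E(n):
    """A stand-in probabilistic machine: its nth bit is a 0.9-coin.
    Any probabilistic Turing machine that outputs an infinite bit
    string could go here."""
    return 1 if random.random() < 0.9 else 0

def M(machine_source, n):
    """Stub predictor: given the source code of the machine it is
    predicting, it must output its guess for the nth bit within
    roughly n^2 steps. This stub just flips a fair coin; a real M
    would analyze machine_source."""
    return random.getrandbits(1)

# M is run on the code of E and asked for one bit at a time.
guesses = [M(inspect.getsource(E), n) for n in range(10)]
print(guesses)
```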
We want M to satisfy properties like the Benford Test: if E outputs a simple subsequence that looks like it is (or actually is) coming from a p-coin, the probability that M outputs a 1 on the bits of that subsequence should converge to p.
We have seen one failed approach to this, inspired by Solomonoff induction, and one partially successful approach, the Benford learner. (The Benford learner was presented in a different context, but can easily be ported over.) I will present some other programs in this context soon, including some possible fixes to the Solomonoff approach. Right now, however, I want to present another desired property for M.
Let E be the Turing machine which quines to run M on E's own code and negates the output. Then the probability that the nth bit output by M on input E is a 1 should converge to 1/2.
The reason this is a desired property is that the probability of getting each bit correct is maximized when that probability is 1/2: if M outputs a 1 with probability p, it matches E's bit (the negation of an independent run of M) with probability 2p(1-p), which is maximized at p = 1/2.
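Here is a rough sketch of the reflective construction, continuing the stubs above. The real construction would have E reproduce its own source by quining; I approximate that with inspect.getsource, and the stub M just outputs a 1 with a fixed probability P rather than running any real algorithm, so the simulation only illustrates the 2P(1-P) accuracy calculation.

```python
import inspect
import random

P = 0.5  # M's probability of outputting a 1; try other values too

def M(machine_source, n):
    """Stub predictor that outputs a 1 with probability P."""
    return 1 if random.random() < P else 0

def E_adversary(n):
    """Reflective machine: run M on E_adversary's own code (a true
    quine in the real construction) and negate the answer."""
    return 1 - M(inspect.getsource(E_adversary), n)

# M's guess and the copy of M run inside E_adversary use independent
# randomness, so M is correct with probability
# P*(1-P) + (1-P)*P = 2P(1-P), which peaks at 1/2 when P = 1/2.
src = inspect.getsource(E_adversary)
trials = 10000
correct = sum(M(src, n) == E_adversary(n) for n in range(trials))
print(correct / trials)  # close to 0.5 when P = 0.5
```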
The Benford learner very likely gets this problem wrong. That is because it is not trying to maximize its accuracy, but instead has a pattern-recognition algorithm hard-coded in. An algorithm with this desired property probably looks a lot more like Solomonoff induction. The failed Solomonoff-induction-inspired approach might actually get this specific problem correct, in spite of failing the Benford Test. I have not checked.
There is another, more deterministic version of the problem. Let E be the Turing machine which quines to compute the probability that M on input E outputs a 1, and outputs a 1 if and only if that probability is less than 1/2. Then the probability that the nth bit output by M on input E is a 1 should converge to 1/2.
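A sketch of the deterministic variant, under my added assumption that M exposes the probability it assigns rather than a sampled bit (the post does not specify this interface):

```python
import inspect

def M_prob(machine_source, n):
    """Hypothetical interface (my assumption): the probability that M
    assigns to the nth bit of the given machine being a 1. Stubbed
    out as exactly 1/2."""
    return 0.5

def E_det(n):
    """Deterministic reflective machine: outputs a 1 exactly when M
    assigns probability less than 1/2 to that event."""
    return 1 if M_prob(inspect.getsource(E_det), n) < 0.5 else 0

# The diagonalization: if M_prob returned p < 1/2, the true bit would
# be 1 (M underestimates); if p >= 1/2, the true bit is 0 (M
# overestimates). Only probabilities converging to 1/2 avoid being
# systematically exploitable.
print([E_det(n) for n in range(5)])
```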
The purpose of the deterministic version is to make it work when E represents a list of logical sentences.
There are many ways we can generalize both of these desired properties. For example, we can add noise, change the probabilities, or put the reflexive sentences in subsequences where the rest of the sequence has other properties.