This post is part of the Asymptotic Logical Uncertainty series. In this post, I present a possible fix to the strategy of using resource bounded Solomonoff induction for asymptotic logical uncertainty. This algorithm came out of conversations with Benja Fallenstein when I visited MIRI this week.
Warning: I do not have any positive results about this algorithm yet. I just suspect that it has a nontrivial chance of having some good properties, so I wanted to share it in case anyone else wants to think about it.
This algorithm takes as input a Turing machine E which outputs an infinite string of bits, and an integer n. The algorithm must “quickly” output a bit, and the probability that it outputs a 1 represents the “probability” that the nth bit output by E is a 1.
IRBSI(E, n)
 1: w = ""
 2: run E for n steps; let k be the number of bits it outputs
 3: for i = 1 to n
 4:     sample a random Turing machine M
 5:     for j = 1 to k
 6:         run M for 2^(n+j-i) steps on input the first j-1 bits output by E
 7:         if M does not output the jth bit output by E
 8:             break and goto 4
 9:     run M for 2^n steps on input w
10:     if M outputs a 1
11:         concatenate a 1 to the end of w
12:     else
13:         concatenate a 0 to the end of w
14: return the last bit of w
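To make the control flow concrete, here is a minimal Python sketch of IRBSI. It is a sketch under heavy simplification, not the real algorithm: sample_random_machine draws from a toy family of three-parameter predictors rather than from genuinely random Turing machines, the step budgets are threaded through only to mirror the pseudocode (the toy machines halt immediately), and max_resamples caps the goto-4 rejection loop, which the pseudocode leaves unbounded.

    import random

    def sample_random_machine(rng):
        # Toy stand-in for "sample a random Turing machine": a machine here
        # is a random affine-mod-2 predictor of the next bit. A faithful
        # implementation would instead sample a random program for a
        # step-bounded universal Turing machine.
        a, b, c = rng.randrange(2), rng.randrange(2), rng.randrange(2)
        def predict(prefix, step_budget):
            # step_budget mirrors the resource bounds in the pseudocode;
            # these toy machines always halt within any budget.
            return (a * len(prefix) + b * sum(prefix) + c) % 2
        return predict

    def irbsi(e_bits, n, rng=None, max_resamples=10000):
        # e_bits: the k bits E outputs within n steps (line 2 of the pseudocode).
        rng = rng or random.Random(0)
        k = len(e_bits)
        w = []
        for i in range(1, n + 1):
            # Rejection-sample a machine that reproduces all k known bits of E,
            # allowing 2^(n+j-i) steps for the check of the jth bit.
            for _ in range(max_resamples):
                M = sample_random_machine(rng)
                if all(M(e_bits[:j - 1], 2 ** (n + j - i)) == e_bits[j - 1]
                       for j in range(1, k + 1)):
                    break
            # Run the accepted machine for 2^n steps on the partial string w
            # and append the bit it outputs.
            w.append(M(w, 2 ** n))
        return w[-1]

    # Example: if E outputs all ones, the sketch should say bit n is a 1.
    print(irbsi([1, 1, 1], n=6))

The structural point the sketch preserves is that every bit of w is produced by a machine run with the same 2^n budget, no matter how early in w it sits.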
The runtime of this algorithm is in O(n 2^n) if E is sufficiently slow.
The rough justification behind this algorithm is that the Solomonoff-induction-inspired approach is good at assigning probabilities to bits conditioned on all the previous bits. The problems come from applying those conditional probabilities to past bits that were necessarily computed with fewer resources.
This algorithm uses Solomonoff induction only to get conditional probabilities, and it combines these conditional probabilities only with other conditional probabilities computed with the exact same 2^n budget of resources. Therefore it does not run into problems from trying to stay consistent with its past, dumber self.
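To spell out how the conditionals combine (this is my paraphrase, in notation not from the post): let P^(i)(b | u) denote the probability that a machine sampled in iteration i of the loop, i.e., conditioned on reproducing E's first k bits within that iteration's time bounds, outputs the bit b when run for 2^n steps on input u. The string w = w_1 ... w_n built by the loop is then distributed as

    P(w_1 ... w_n) = P^(1)(w_1) * P^(2)(w_2 | w_1) * ... * P^(n)(w_n | w_1 ... w_(n-1)),

and the returned bit is the last coordinate of a sample from this distribution, so every factor in the product is a conditional probability computed at the same 2^n-step budget.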
I do not know how good this algorithm will end up being, but I suspect that it gets simple examples like the ones discussed earlier in this series correct.
It may be that, to get good results from this algorithm, you have to use tricks like those we used with the original Solomonoff approach: sampling the Turing machines in a way that is further biased towards simple machines, or allowing the machines to run longer at the cost of some of the probability mass with which they are sampled.
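For concreteness, here is one hypothetical way such a sampling scheme could look. The geometric distributions and the base_budget * 2^boost rule are my illustration of the two tricks, not details from the post or its predecessor.

    import random

    def sample_machine_with_budget(base_budget, rng):
        # Trick 1: bias towards simple machines. Draw a description length
        # with P(length = l) = 2^(-l), then a uniformly random l-bit
        # description for the machine.
        length = 1
        while rng.random() < 0.5:
            length += 1
        description = [rng.randrange(2) for _ in range(length)]
        # Trick 2: trade sampling probability for runtime. Draw a budget
        # boost with P(boost = c) proportional to 2^(-c); a machine thus
        # pays a factor of 2 in sampling probability for each doubling of
        # its time budget beyond the base budget.
        boost = 0
        while rng.random() < 0.5:
            boost += 1
        return description, base_budget * 2 ** boost

    # Example: sample a machine with a base budget of 2^10 steps.
    print(sample_machine_with_budget(2 ** 10, random.Random(0)))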