Let’s forget about the oracle. What about the program that outputs X if 1 + 1 = 2, and prints 0 otherwise? Let’s call it A(1,1). The formalism requires that P(X|A(1,1)) = 1, and it requires that P(A(1,1)) = 2^-K(A(1,1)), but does it need to know that “1 + 1 = 2” is somehow proven by A(1,1) printing X?
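To be concrete, here is a minimal sketch of what I mean by A(1,1), assuming X is just some fixed output string (the names are illustrative, not part of the formalism):

```python
# Sketch of A(1,1): prints X exactly when 1 + 1 = 2, and prints 0 otherwise.
# Since 1 + 1 = 2 is true, this program always prints X.
X = "X"  # whatever fixed string the agent is asked about

def A_1_1():
    if 1 + 1 == 2:
        print(X)
    else:
        print(0)

A_1_1()
```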
In either case, you’ve shown me something that I explicitly doubted before: one can prove any provable theorem given access to a Solomonoff agent’s distribution and the ability to write a program that prints X iff theorem S is provable. All one has to do is check the probability the agent assigns to X conditional on that program.
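Here is roughly the procedure I understand you to be describing, as code. This is only a sketch: `enumerate_proofs` and `solomonoff_conditional` are stand-ins I’m inventing for a proof enumerator over some fixed formal system and for oracle access to the agent’s (uncomputable) conditional distribution.

```python
# Hedged sketch of the "prove a theorem by querying the agent" procedure.

def program_for(S):
    """Source text of a program that prints X iff theorem S is provable."""
    return f"""
for proof in enumerate_proofs():       # enumerate all proofs in the system
    if proof.concludes({S!r}):         # a proof of S has been found
        print("X")
        break
"""

def is_provable(S, solomonoff_conditional):
    # If S is provable, the program above halts and prints X, so the agent
    # should assign P(X | program) = 1; if S is not provable, the program
    # never prints X and the conditional probability falls short of 1.
    return solomonoff_conditional("X", given=program_for(S)) == 1
```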