Sure, we might need an oracle to figure out whether a given program outputs anything at all, but we would not need to assign a probability of 1 to Fermat’s Last Theorem (or at least I can’t figure out why we would).
Fermat’s Last Theorem (FLT) states that no three positive integers a, b, and c can satisfy the equation a^n + b^n = c^n for any integer value of n greater than two. Consider a program that iterates over all possible values of a, b, c, and n looking for a counterexample to FLT and, if it finds one, calls a subroutine that eventually prints out X (where X is your current observation). In order to do Solomonoff induction, you need to query a halting oracle on this program. But knowing whether this program halts or not is equivalent to knowing whether FLT is true or false.
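For concreteness, here’s a minimal sketch of that searcher (Python; the function and argument names are mine, not anything from the formalism). It dovetails over all tuples by their sum, so every (a, b, c, n) is eventually checked:

```python
from itertools import count

def flt_searcher(X):
    """Print X and halt iff some a^n + b^n = c^n with n > 2 exists."""
    for s in count(6):                       # s = a + b + c + n; enumerate by sum
        for a in range(1, s):
            for b in range(1, s - a):
                for c in range(1, s - a - b):
                    n = s - a - b - c
                    if n > 2 and a**n + b**n == c**n:
                        print(X)             # counterexample found
                        return
    # If FLT is true, the loop above never terminates, so deciding
    # whether flt_searcher halts is exactly deciding FLT.
```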
Let’s forget about the oracle. What about the program that outputs X only if 1 + 1 = 2, and otherwise prints 0? Let’s call it A(1,1). The formalism requires that P(X|A(1,1)) = 1, and it requires that P(A(1,1)) = 2^-K(A(1,1)), but does it need to know that “1 + 1 = 2” is somehow proven by A(1,1) printing X?
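A minimal sketch of A(1,1), with the name adapted for Python:

```python
def A_1_1(X):
    # Prints X iff 1 + 1 = 2; otherwise prints 0.
    if 1 + 1 == 2:
        print(X)
    else:
        print(0)
```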
In either case, you’ve shown me something that I explicitly doubted before: one can prove any provable theorem given access to a Solomonoff agent’s distribution, provided one knows how to write a program that prints X iff theorem S is provable. All one has to do is check the probability the agent assigns to X conditional on that program.
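Spelled out as a sketch (every name here is made up; solomonoff_conditional in particular is an assumed black box, not a computable function):

```python
def is_provable(S, X, solomonoff_conditional):
    # prog prints X iff theorem S is provable (e.g. by enumerating
    # all proofs in some fixed formal system and checking each one).
    prog = searcher_for(S, X)              # hypothetical helper
    # The agent assigns P(X | prog) = 1 exactly when prog halts with X,
    # i.e. exactly when S is provable.
    return solomonoff_conditional(X, prog) == 1.0
```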
Awesome. I’m pretty sure you’re right; that’s the most convincing counterexample I’ve come across.
I have a weak doubt, but I think you can get rid of it:
Let’s name the program FLT().
I’m just not sure this means that the theorem itself is assigned a probability. Yes, I have an oracle, but it doesn’t assign a probability to a program halting; it tells me whether it halts or not. What the Solomonoff formalism requires is that “if (halts(FLT()) == true) then P(X|FLT()) = 1”, that “if (halts(FLT()) == false) then P(X|FLT()) = 0”, and that “P(FLT()) = 2^-K(FLT())”. Where in all this is the probability of Fermat’s Last Theorem? Having an oracle may imply knowing whether or not FLT is a theorem, but it does not imply that we must assign that theorem a probability of 1. (Or maybe it does and I’m not seeing it.)
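To make the case analysis explicit, here is everything I claim the formalism demands, reusing the flt_searcher sketch from above (halting_oracle and K are assumed primitives, not anything implementable):

```python
def required_assignments(X, halting_oracle, K):
    # The oracle answers yes/no; it never hands us a probability.
    if halting_oracle(flt_searcher):
        p_x_given_prog = 1.0        # the program halts, so it prints X
    else:
        p_x_given_prog = 0.0        # it runs forever; X never appears
    p_prog = 2 ** -K(flt_searcher)  # the prior P(FLT()) = 2^-K(FLT())
    # Note: nothing above is a probability assigned to FLT itself.
    return p_x_given_prog, p_prog
```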
Edit: Come to think of it… I’m not sure there’s a relevant difference between knowing whether a program that outputs True iff theorem S is provable will end up halting, and assigning probability 1 to theorem S. It does seem that I must assign 1 to statements of the form “A or ~A” or else it won’t work; whereas if theorem S is not in the domain of our probability function, nothing seems to go wrong.
In either case, this probably isn’t the standard reason for believing in, or thinking about, logical omniscience, since the concept of logical omniscience is probably older than Solomonoff induction. (I am of course only realizing that in hindsight, now that I’ve seen a powerful counterexample to my argument.)