I’m having trouble parsing your third point (I don’t know what it means for a Turing machine to [fail to] “represent a computable sequence”, especially since I thought that a “computable sequence” was by definition the output of a Turing machine)
A sequence f(n) is computable if there’s a Turing machine T that, given input n, halts with output f(n). But not all Turing machines halt on all inputs. It isn’t hard to make Turing machines that go into trivial infinite loops, and, what is worse, Turing machines can fail to halt in much uglier and less obvious ways, to the point where the question “Does the Turing machine M halt on input n?” is not in general decidable. This is known as the halting theorem. So if I’m using some form of Solomonoff prior, I can’t even in general tell whether a machine describes a point in my hypothesis space.
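To make that concrete, here is a minimal Python sketch of two candidate hypotheses (the function names and the choice of Goldbach’s conjecture are purely illustrative, not anything from the discussion): one fails to halt in the trivial way, while the other halts on every input if and only if an open conjecture is true, so there is no known way to tell whether it describes a point in the hypothesis space.

```python
# Two candidate "sequence generators" for a Solomonoff-style learner.
# Whether each one actually defines a total computable sequence f(n)
# comes down to a halting question.

def trivial_loop(n):
    """Obviously never halts: the easy-to-detect kind of failure."""
    while True:
        pass

def is_prime(k):
    return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))

def goldbach_like(n):
    """Halts on input n only if every even number in 4..4+2n is a sum of
    two primes. So it halts on *every* input iff Goldbach's conjecture is
    true -- a question nobody currently knows how to answer."""
    for m in range(4, 4 + 2 * n + 1, 2):
        if not any(is_prime(p) and is_prime(m - p) for p in range(2, m)):
            while True:   # counterexample found: loop forever
                pass
    return n
```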
What I don’t understand is your argument that there is a specific problem with logical uncertainty that doesn’t apply to implementing Solomonoff induction in general. Yes, the halting problem is undecidable, so you can’t decide if a sequence is computable; but assuming you’ve already got a Solomonoff-induction machine that can say “my probability that it will rain tomorrow is 50%”, why can’t it also say “my probability that the Riemann Hypothesis is true is 50%”?
but assuming you’ve already got a Solomonoff-induction machine that can say “my probability that it will rain tomorrow is 50%”, why can’t it also say “my probability that the Riemann Hypothesis is true is 50%”?
That’s actually a really good example. It isn’t that difficult to make a Turing machine that halts if and only if the Riemann hypothesis is false. So a system using Solomonoff induction has to recognize, for starters, whether or not that Turing machine halts. Essentially, in the standard version of Solomonoff induction, you need to assume that you have access to indefinitely large computing power. You can try making models of what happens when the agent has only limited computational power (in some sense, AIXI implementations and practical implementations of Bayesian reasoning need to do something close to this). But if one doesn’t assume indefinite computing power, then a lot of the results about how different priors behave no longer hold (or at least the proofs don’t obviously go through). For more detail on that sort of thing I’d recommend talking to cousin_it or jimrandomh, since they’ve thought about these issues and know a lot more about them than I do.
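For concreteness, one standard route to such a machine goes through Lagarias’ elementary criterion: RH holds iff σ(n) ≤ H_n + e^{H_n} ln(H_n) for every n ≥ 1. The Python sketch below is only illustrative (it uses floating-point arithmetic where a real construction would need exact arithmetic); it searches for a counterexample, and so halts if and only if the Riemann hypothesis is false.

```python
# Illustrative sketch: halts iff the Riemann hypothesis is false, via
# Lagarias' criterion  RH <=> sigma(n) <= H_n + exp(H_n) * ln(H_n)  for
# all n >= 1. Floats are used only to show the shape of the machine.
from math import exp, log

def sigma(n):
    """Sum of the positive divisors of n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def search_for_rh_counterexample():
    harmonic = 0.0
    n = 1
    while True:
        harmonic += 1.0 / n                                # H_n
        bound = harmonic + exp(harmonic) * log(harmonic)
        if sigma(n) > bound:
            return n          # only ever reached if RH is false
        n += 1
```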
It isn’t that difficult to make a Turing machine that halts if and only if the Riemann hypothesis is false. So a system using Solomonoff induction has to recognize, for starters, whether or not that Turing machine halts.
Only in the sense that a human trying to solve the Riemann Hypothesis also has to recognize whether the same Turing machine halts.
When I talk about “going meta”, I really mean it: when the Solomonoff machine that I have in mind is considering “whether this sequence is computable” or “whether the Riemann Hypothesis is true” or more generally “whether this Turing machine halts”, it is going to be doing so the same way a human does: by using a model of the mathematical object in question that isn’t actually equivalent to that same mathematical object. It won’t be answering the question “natively”, the way a computer typically adds 3+5 (i.e. by specific addition algorithms built into it); instead, it will be more closely analogous to a computer being programmed to simulate three apples being combined with five apples on its display screen, and then count the apples by recognizing their visual representation.
So the upshot is that, to be able to answer the question “what is your probability that this Turing machine halts?”, the Solomonoff AI does not need to solve anything equivalent to the halting problem. It just needs to examine the properties of some internal model corresponding to the label “Turing machine”, which need not be an actual Turing machine. It is in this way that uncertainty about mathematical truths is handled.
It should go without saying that this isn’t directly of use in building such an AI, because it doesn’t tell you anything about how to construct the low-level algorithms that actually run it. But this thread isn’t about how to build a Bayesian AI; rather, it’s about whether a Bayesian AI is something that it makes sense to build. And my point here is that “Well, if you had a Bayesian AI, it wouldn’t be able to give you probability estimates concerning the truth of mathematical statements” is not a valid argument on the latter question.
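As a purely illustrative sketch of the kind of estimator being described (the step budget and fallback prior below are arbitrary placeholders, not anything prescribed above): it assigns a probability to “this machine halts” by consulting a bounded model of the machine, and at no point does anything equivalent to solving the halting problem.

```python
# Toy estimator: gives a probability for "this program halts" by running
# a bounded *model* of the program, rather than deciding halting.

def halting_probability_estimate(step, state0, budget=10_000, prior=0.5):
    """`step` maps a model state to (halted, next_state).
    Report certainty if the modelled machine halts within `budget` steps;
    otherwise fall back on the prior and stay uncertain."""
    state = state0
    for _ in range(budget):
        halted, state = step(state)
        if halted:
            return 1.0
    return prior

# Example: a modelled machine that halts after a few steps.
print(halting_probability_estimate(lambda s: (s >= 3, s + 1), 0))  # 1.0
```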
So the upshot is that, to be able to answer the question “what is your probability that this Turing machine halts?”, the Solomonoff AI does not need to solve anything equivalent to the halting problem.
By “Solomonoff AI” do you mean “some computable approximation of a Solomonoff AI”? My impression is that the Solomonoff prior just does solve the halting problem, and that this is a standard proof that it is uncomputable.
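For reference, the uncomputability claim is usually put roughly like this. The Solomonoff prior assigns to a string x the weight

$$M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|},$$

summing over programs p whose output on the universal prefix machine U begins with x. This sum can be approximated from below (run ever more programs for ever more steps), but computing it to arbitrary precision would, in effect, require knowing which programs never produce further output, a halting-style question; the same idea shows that Chaitin’s Ω = Σ_{U(p) halts} 2^{-|p|} encodes the halting problem. So M is lower semicomputable but not computable.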
when the Solomonoff machine that I have in mind is considering “whether this sequence is computable” or “whether the Riemann Hypothesis is true” or more generally “whether this Turing machine halts”, it is going to be doing so the same way a human does.
Humans are bad at this. Is there some reason to think that “the Solomonoff machine you have in mind” will be better at it?
My impression is that the Solomonoff prior just does solve the halting problem,
The programmer would need to have solved the halting problem in order to program the Solomonoff prior into the AI, but the AI itself would not be solving the halting problem.
when the Solomonoff machine that I have in mind is considering “whether this sequence is computable” or “whether the Riemann Hypothesis is true” or more generally “whether this Turing machine halts”, it is going to be doing so the same way a human does.
Humans are bad at this. Is there some reason to think that “the Solomonoff machine you have in mind” will be better at it?
It may or may not be (though hopefully it would be more intelligent than humans—that’s the point after all!); it doesn’t matter for the purpose of this argument.
The programmer would need to have solved the halting problem in order to program the Solomonoff prior into the AI, but the AI itself would not be solving the halting problem.
This strikes me as a confused way of looking at things. As you know, it’s a theorem that the halting problem is unsolvable in a technical sense. That theorem expresses a limitation on computer programs, not computer programmers.