So the upshot is that, to be able to give an answer to the question “what is your probability that this Turing machine halts?”, the Solomonoff AI does not need to solve anything equivalent to the halting problem.
By “Solomonoff AI” do you mean “some computable approximation of a Solomonoff AI”? My impression is that the Solomonoff prior just does solve the halting problem, and that this is a standard proof that it is uncomputable.
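To make that concrete, here is the standard formulation as I understand it (the usual lower-semicomputable definition, sketched from memory):

$$ M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)} $$

where $U$ is a universal monotone machine, the sum runs over the minimal programs $p$ whose output begins with $x$, and $\ell(p)$ is the length of $p$. Running all programs with ever-larger time bounds gives monotonically improving lower bounds, so $M$ is approximable from below; but evaluating the sum exactly requires knowing which programs never extend their output, and that is where the halting problem enters.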
when the Solomonoff machine that I have in mind is considering “whether this sequence is computable” or “whether the Riemann Hypothesis is true” or more generally “whether this Turing machine halts”, it is going to be doing so the same way a human does.
Humans are bad at this. Is there some reason to think that the “Solomonoff machine you have in mind” will be better at it?
My impression is that the Solomonoff prior just does solve the halting problem,
The programmer would need to have solved the halting problem in order to program the Solomonoff prior into the AI, but the AI itself would not be solving the halting problem.
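As a toy sketch of the distinction I mean (everything named here is hypothetical: `PRIOR_TABLE` stands in for the uncomputable object the programmer would have to supply, the values are made up, and filling the table in correctly is exactly the step that requires solving halting problems):

```python
# Toy sketch: the "AI" below does nothing uncomputable itself.
# All of the halting-problem-solving is hidden in PRIOR_TABLE,
# which the (idealized) programmer is assumed to have filled in.

# Hypothetical: maps an observation sequence to its prior probability.
# Producing these numbers correctly is the uncomputable step.
PRIOR_TABLE: dict[str, float] = {
    "0": 0.4,    # made-up illustrative values
    "1": 0.6,
    "01": 0.25,
}

def predict_next_bit(observed: str) -> float:
    """Return P(next bit = 1 | observed), by table lookup only.

    Conditioning is ordinary arithmetic; no halting problem here.
    """
    p_then_1 = PRIOR_TABLE.get(observed + "1", 0.0)
    p_then_0 = PRIOR_TABLE.get(observed + "0", 0.0)
    total = p_then_0 + p_then_1
    return p_then_1 / total if total > 0 else 0.5

print(predict_next_bit("0"))
```

The real table is of course infinite as well as uncomputable; the sketch is only meant to locate where the uncomputability lives, namely in producing the table, not in consulting it.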
when the Solomonoff machine that I have in mind is considering “whether this sequence is computable” or “whether the Riemann Hypothesis is true” or more generally “whether this Turing machine halts”, it is going to be doing so the same way a human does.
Humans are bad at this. Is there some reason to think that the “Solomonoff machine you have in mind” will be better at it?
It may or may not be (though hopefully it would be more intelligent than humans—that’s the point after all!); it doesn’t matter for the purpose of this argument.
The programmer would need to have solved the halting problem in order to program the Solomonoff prior into the AI, but the AI itself would not be solving the halting problem.
This strikes me as a confused way of looking at things. As you know, it’s a theorem that the halting problem is unsolvable in a technical sense. That theorem expresses a limitation on computer programs, not computer programmers.
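To spell the theorem out: the standard diagonal argument assumes, for contradiction, a decider `halts(prog, arg)` that returns True iff prog(arg) halts (the function is hypothetical; that is the whole point), and derives a contradiction:

```python
# Standard diagonal argument. `halts` is hypothetical: assume, for
# contradiction, that it is a program returning True iff prog(arg) halts.

def paradox(prog):
    if halts(prog, prog):   # if the decider says prog(prog) halts...
        while True:         # ...then loop forever,
            pass
    else:                   # ...otherwise halt immediately.
        return None

# Now paradox(paradox) halts iff halts(paradox, paradox) returns False,
# i.e. iff paradox(paradox) does not halt. So no such `halts` can exist,
# no matter how clever the programmer who tries to write it.
```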