> My impression is that the Solomonoff prior just does solve the halting problem, […]

The programmer would need to have solved the halting problem in order to program the Solomonoff prior into the AI, but the AI itself would not be solving the halting problem.
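For reference, here is the standard definition of the prior under discussion (textbook material, e.g. Li and Vitányi, not quoted from this thread), which is where the halting problem enters:

```latex
% Discrete universal a priori probability for a universal prefix machine U.
% U(p) = x means: U, run on program p, halts with output x.
m(x) \;=\; \sum_{p \,:\, U(p) \,=\, x} 2^{-|p|}
% m is lower-semicomputable: dovetail over all programs and add 2^{-|p|}
% whenever one is found to halt with output x. But m is not computable:
% it can be evaluated to any precision using an oracle for the halting
% problem, and by no ordinary program. This is the sense in which the
% prior itself "solves" the halting problem -- writing it down exactly
% requires halting information that no program can supply.
```
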
> […] when the Solomonoff machine that I have in mind is considering “whether this sequence is computable” or “whether the Riemann Hypothesis is true” or more generally “whether this Turing machine halts”, it is going to be doing so the same way a human does.

Humans are bad at this. Is there some reason to think that the “Solomonoff machine you have in mind” will be better at it?
It may or may not be (though hopefully it would be more intelligent than humans—that’s the point after all!); it doesn’t matter for the purpose of this argument.

> The programmer would need to have solved the halting problem in order to program the Solomonoff prior into the AI, but the AI itself would not be solving the halting problem.

This strikes me as a confused way of looking at things. As you know, it’s a theorem that the halting problem is undecidable: no single program can correctly determine, for every program and input, whether that program halts on that input. That theorem expresses a limitation on computer programs, not on computer programmers.
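To make that limitation concrete, here is a minimal sketch of Turing’s diagonal argument in Python. The function `halts` is hypothetical — assuming it exists is exactly the premise the argument refutes; everything else is ordinary code:

```python
# Sketch of Turing's diagonal argument. `halts` is hypothetical: the
# argument below shows no such decider can be written.

def halts(program_source: str, arg: str) -> bool:
    """Hypothetically returns True iff the program halts when run on arg."""
    raise NotImplementedError("no such decider can exist")

TROUBLEMAKER = """
def troublemaker(src):
    if halts(src, src):   # would this very program halt on its own source?
        while True:       # ...then loop forever instead,
            pass
    # ...otherwise halt immediately.
"""

# Contradiction when troublemaker is fed its own source T = TROUBLEMAKER:
#   if halts(T, T) returns True,  troublemaker(T) loops forever -> wrong answer;
#   if halts(T, T) returns False, troublemaker(T) halts at once  -> wrong answer.
# So no program `halts` can be correct on all inputs. Note the theorem only
# rules out one program deciding halting for *every* program/input pair;
# it says nothing about proving halting for particular programs of interest.
```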