Sorry for the delayed reply! I like this post, and agree with much of what you’re saying. I guess I disagree with the particular line you draw, though. I think there’s an interesting line, but it’s at “is there a Turing machine which will halt and give the answer to this problem”, rather than at “is there a Turing machine which will spit out the correct answer for a large enough input (but will spit out wrong answers for smaller inputs, and you don’t know which inputs are large enough)”. The latter doesn’t seem that qualitatively different to me from “is there a Turing machine which will give you the correct answer if you give it two large enough numbers (but will spit out wrong answers if the numbers aren’t large enough, and what counts as ‘large enough’ for the second number depends on the first number you give it)”, which takes you one more level up the arithmetic hierarchy. The line between the first and the second seems more significant to me than the line between the second and the third.
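To make the second notion concrete, here is a toy Python sketch of such a procedure for the halting problem, which is the classic example of something that is limit computable but not decidable. The encoding of “programs” as Python generators and the names guess, halts_after, and loops_forever are just my own stand-ins for the illustration.

```python
# Toy illustration of "correct for a large enough input": the halting problem
# is not decidable, but it is limit computable.  guess(prog, n) simulates prog
# for n steps and answers "halts" iff it halted within that budget; the answer
# can be wrong for small n, but it stabilizes on the truth once n is large
# enough (and you can't compute in advance how large that is).
# Toy encoding: a "program" is a Python generator function; one yield = one step.

def guess(prog, n):
    """Provisional answer to 'does prog halt?', using an n-step budget."""
    it = prog()
    for _ in range(n):
        try:
            next(it)
        except StopIteration:
            return True          # halted within the budget
    return False                 # hasn't halted yet, so a provisional "no"

def halts_after(k):
    """A program that runs for k steps and then halts."""
    def prog():
        for _ in range(k):
            yield
    return prog

def loops_forever():
    while True:
        yield

print(guess(halts_after(1000), 10))      # False (wrong: the budget is too small)
print(guess(halts_after(1000), 10**6))   # True  (right, and stays right for larger n)
print(guess(loops_forever, 10**6))       # False (right for every n in this case)
```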
Regarding the reflective oracle result: yes, the version presented in the post is higher up in the arithmetic hierarchy, but I think the variant discussed in this comment thread with Paul is probably approximable. As I said above, though, I’m not convinced that that’s actually the right place to put the line. Also, there’s of course an “AIXItl-like version” which is made computable by restricting itself to hypotheses with bounded source code length and computation time (a rough sketch of that bounding move is at the end of this comment). But most importantly, to my mind, I don’t actually think it would make sense for any actual agent to compute an oracle like this; the point is to define a notion of a perfect Bayesian agent which can reason about worlds containing other perfect Bayesian agents, in the hope that this model will yield useful insights about the real world of logically uncertain agents reasoning about worlds containing other logically uncertain agents, and those logically uncertain agents certainly don’t reason about each other by computing out an “oracle” first.
...though, while I’m guessing the variant of the reflective oracle discussed in the comment thread may be approximable, it seems less likely that a version of AIXI based on it would itself be approximable.
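Since I mentioned the “AIXItl-like version” above, here is a rough sketch of what I mean by the bounding move: restrict attention to hypotheses of bounded source code length, and cut each one off after a bounded number of steps. This is not the actual AIXItl construction (which works with policies and proof search); it only illustrates how bounding code length and computation time makes the mixture computable, and the name bounded_mixture_prediction and the (length, predict) encoding of hypotheses are made up for the example.

```python
# Rough sketch of the "bounded source code length and computation time" move,
# applied to a predictive Bayes mixture.  Not AIXItl itself; it only shows how
# truncating the hypothesis class makes the whole thing computable.

def bounded_mixture_prediction(hypotheses, history, t):
    """
    hypotheses: list of (length_in_bits, predict) pairs, standing in for
        "every program whose source code is at most l bits long".
    predict(history, budget): returns a probability that the next bit is 1,
        or raises TimeoutError if it can't finish within `budget` steps.
    Returns the 2**(-length)-weighted mixture prediction, renormalized over
    the hypotheses that finished within the time budget t.
    """
    total_weight = 0.0
    weighted_sum = 0.0
    for length, predict in hypotheses:
        try:
            p = predict(history, t)   # computation-time bound: cut off after t steps
        except TimeoutError:
            continue                  # too slow, so it drops out of the mixture
        weight = 2.0 ** (-length)     # shorter hypotheses get more prior weight
        total_weight += weight
        weighted_sum += weight * p
    return weighted_sum / total_weight if total_weight > 0 else 0.5

# Two crude example hypotheses: "the next bit is always 1" (short) and
# "the next bit repeats the last bit" (slightly longer).
hyps = [
    (3, lambda hist, budget: 1.0),
    (5, lambda hist, budget: float(hist[-1]) if hist else 0.5),
]
print(bounded_mixture_prediction(hyps, [1, 1, 0], t=1000))
```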