Suppose we define a generalized version of Solomonoff Induction based on some second-order logic. The truth predicate for this logic can’t be defined within the logic, and therefore a device that can decide the truth value of arbitrary statements in this logic has no finite description within this logic. If an alien claimed to have such a device, this generalized Solomonoff induction would assign the hypothesis that they’re telling the truth zero probability, whereas we would assign it some small but positive probability.
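To spell out why the probability comes out to exactly zero (a rough sketch in my own notation, not part of the formal setup): the generalized prior weights a hypothesis by its finite descriptions in the underlying language,
\[
P(H) \;=\; \sum_{d \in \mathcal{L},\ d \text{ describes } H} 2^{-|d|},
\]
so if no finite \(d \in \mathcal{L}\) describes the truth-deciding device, the sum is empty and \(P(H) = 0\), whereas an informal human prior would still put some \(\epsilon > 0\) on the alien’s claim.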
Actually, what Tarski seems to show is that for any language for describing any set of universes, there just is no language representable inside those universes for representing arbitrary statements, with truth values, about “everything” including the language and the statements in it. If you try to invent such a language, it will end up inconsistent—not at the point where it tries to correctly assign truth, but at the point where it can represent truth, due to analogues of “This statement is false.” It isn’t needful to assign 0 or 1, in particular, to this statement; the moment you represent it, you can prove an inconsistency. But is this really proper to blame on Solomonoff induction?
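Sketching the standard argument, with the usual corner-quote notation for Gödel numbering: suppose the language \(\mathcal{L}\) could represent its own truth predicate, i.e.
\[
\mathrm{True}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi \quad \text{for every sentence } \varphi \text{ of } \mathcal{L}.
\]
The diagonal lemma then gives a sentence \(L\) with
\[
L \leftrightarrow \neg\,\mathrm{True}(\ulcorner L \urcorner),
\]
and the two biconditionals together yield \(L \leftrightarrow \neg L\). So the contradiction arrives as soon as the truth predicate is representable, independently of how any probabilities get assigned.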
I would like to have a method of induction that, for any formal language, assigns a non-zero prior to the existence of a device that can enumerate or decide all true sentences in that language, or alternatively an explanation based on reasonable principles for why such devices should have zero probability. Right now we do not have either, and your research program for improving SI (i.e., to base it on second-order logic) will not give us either even if it’s successful. So while I’m not sure it makes sense to say I “blame” Solomonoff induction (what could that mean?), you could say that I’m not satisfied with either the status quo or any improvements to it that we can currently foresee.
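Stated as a desideratum (my formalization, nothing more):
\[
\text{for every formal language } \mathcal{L}: \quad P\big(\exists \text{ a device that decides or enumerates } \mathrm{Th}(\mathcal{L})\big) > 0,
\]
or else a principled argument for why this probability should be exactly 0.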
Give me a set of formal languages over which you can say the phrase “for any formal language”, and the truth predicate for the union of the set won’t be in any language in the set. I’m still trying to understand how to deal with this inside AI, but I’m not sure that blaming it on second-order logical induction is putting the blame in the right place.
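The construction I have in mind is the usual Tarskian hierarchy (a sketch, standard notation): let \(\mathcal{L}_0\) be some base language and \(\mathcal{L}_{n+1} = \mathcal{L}_n \cup \{\mathrm{True}_n\}\), where \(\mathrm{True}_n\) is a truth predicate for \(\mathcal{L}_n\). Each \(\mathcal{L}_n\) is in the set, but a truth predicate for \(\bigcup_n \mathcal{L}_n\) living inside some \(\mathcal{L}_m\) would in particular be a truth predicate for \(\mathcal{L}_m\) itself, which Tarski rules out; so it lies in none of them.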
Again, I’m not sure what you mean by “blame” here. If you’re saying that Tarski’s result represents a problem that affects more than just attempts to generalize Solomonoff induction, then I agree.
BTW, while I have your attention, what’s your evaluation of Paul Christiano’s FAI design idea, which sort of tries to punt as many philosophical problems as possible (including this one)? I noticed that you didn’t comment in that discussion.