This is the second mention of second-order logical Solomonoff Induction, but I can’t imagine what such a thing would look like.
It’s Luke and Eliezer’s term, but I guess the idea is similar to the one I had. You take some second-order theory, and let each string in the formal language that constitutes a description of X contribute n^-l to the probability of X, where n is the size of the alphabet, and l is the length of the string. Use the resulting distribution in place of the universal prior in Solomonoff Induction.
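To spell out the construction as I understand it (just a sketch in my own notation, not theirs): write $D(X)$ for the set of strings in the formal language that are descriptions of $X$, $n$ for the alphabet size, and $|s|$ for the length of a string $s$. Then

\[ P(X) \;\propto\; \sum_{s \in D(X)} n^{-|s|}, \]

which plays the same role as the universal prior in ordinary Solomonoff Induction (there, a sum of $2^{-|p|}$ over programs $p$ for a universal machine), with descriptions in the formal language taking the place of programs.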
Let’s say the theory is ZFC. Does the string “0 if the continuum hypothesis is true, otherwise 1” contribute to the probability of 0, or to the probability of 1?
(As you most likely know, in the standard interpretation of second-order ZFC the continuum hypothesis has a definite truth value, but it can’t be proved or disproved in any of the deductive systems that have been proposed for second-order ZFC.) My suggestion is that the string “0 if the continuum hypothesis is true, otherwise 1” should contribute to the probability of 0 if the continuum hypothesis is true, and to the probability of 1 if it is false. The problem that we don’t know how to deal with a probability distribution like this seems similar to the problem of not knowing how to use the universal prior in the original Solomonoff Induction; both seem to be instances of logical/mathematical uncertainty, and there might be a general solution to that problem.
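To be concrete about how the contributions split (again my own notation): writing $s$ for the string above and $\mathbf{1}[\mathrm{CH}]$ for an indicator of the continuum hypothesis,

\[ \Delta P(0) = n^{-|s|}\,\mathbf{1}[\mathrm{CH}], \qquad \Delta P(1) = n^{-|s|}\,\bigl(1 - \mathbf{1}[\mathrm{CH}]\bigr), \]

where $\Delta P(\cdot)$ is the contribution of $s$ to each probability. The distribution is perfectly well defined in the standard interpretation; the difficulty is that an agent which can’t settle CH is logically uncertain about which of the two terms is nonzero.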
The reason I think this is plausible is that, for example, P=NP may not be provable or disprovable in any formal deductive system that has been proposed, and yet any agent would still have to make decisions that are affected by its truth value, such as whether to search for a polynomial-time algorithm for 3-SAT. If there is some principled way to make those decisions, perhaps the same methods can be used to make decisions involving the continuum hypothesis?
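As a toy version of that decision (with $q$, $B$, and $c$ standing in for whatever the agent actually believes and cares about): if the agent assigns subjective probability $q$ to P=NP, expects benefit $B$ from finding a polynomial-time 3-SAT algorithm, and the search costs $c$, then searching looks worthwhile roughly when $qB - c > 0$. The open question is where a principled $q$ comes from when no available deductive system settles the matter.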