Just want to point out that the minimum-description-length prior and the speed prior are two very different notions of simplicity. The universal distribution Paul is referring to is probably the former. Simple in the description-length sense is not incompatible with uncomputability. In my experience of conversations at AIRCS workshops, it's perennially considered an open question whether the speed prior is also full of consequentialists.
Simple in the description-length sense is incompatible with uncomputability. Uncomputability means there is no finite way to point to the function. That's what I currently think, but I'm confused, because you seem to understand all those words and still disagree.
There's a level-slip here, one that's more common in discussions of algorithmic probability than I realized when I wrote as if it were an obvious nitpick.
There are two ways to point at the Solomonoff prior: the operational way (Solomonoff's original idea) and the denotational way (called a "universal mixture"). These are shown equivalent by Hutter et al. (2011). I'll try to explain the confusion from both angles.
Operationally, we're going to feed an infinite stream of fair coin flips into a universal monotone Turing machine, and then measure the distribution of the output. What we do not do is take the random noise we're about to feed into the Turing machine and apply a halting oracle to it to make sure it encodes a computable function before running it. We just run it. Each such "hypothesis" may halt eventually, may produce output forever, or may loop forever after producing only finitely many bits. As a Solomonoff inductor, we're going to have infinite patience and consider all the output a program produces before it either halts or loops. This is the sense in which a hypothesis doesn't have to be "computable" to be admissible, it just has to be "a program".
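To make that concrete: with a universal monotone machine $U$, the resulting measure is $M(x) = \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)}$, the total probability that fair coin flips, read as a program, make $U$ output something beginning with $x$. Below is a minimal Monte Carlo sketch of the same idea. Everything in it is an assumption for the sake of runnability: the machine is a toy of my own devising rather than a universal one, the input stream is finite, and there's a step cap where the real definition has infinite patience. What it does preserve is the point above: no halting oracle anywhere, we just run whatever the coin flips encode and tally the output prefixes.

```python
import random

def run_toy_monotone_machine(flips, max_steps=100):
    """A toy (NOT universal) monotone machine. It reads the input bit
    stream two bits at a time and writes to a one-way output tape:
      00 -> emit 0
      01 -> emit 1
      10 -> emit the last output bit, forever (productive non-halting)
      11 -> halt
    No halting oracle: we just run, truncated at max_steps (the real
    construction has an infinite input stream and infinite patience;
    both are finitized here so this actually terminates)."""
    out, i = [], 0
    for _ in range(max_steps):
        if i + 1 >= len(flips):       # ran out of (finite) input
            break
        op = (flips[i], flips[i + 1])
        i += 2
        if op == (0, 0):
            out.append(0)
        elif op == (0, 1):
            out.append(1)
        elif op == (1, 0):
            out.append(out[-1] if out else 0)
            i -= 2                    # consumes no more input: loops, emitting forever
        else:                         # (1, 1): halt
            break
    return out

def estimate_prior(prefix, samples=100_000, input_len=40):
    """Estimate the a-priori probability that the output begins with
    `prefix`, by feeding the machine fair coin flips and tallying."""
    hits = 0
    for _ in range(samples):
        flips = [random.getrandbits(1) for _ in range(input_len)]
        if run_toy_monotone_machine(flips)[:len(prefix)] == prefix:
            hits += 1
    return hits / samples

print(estimate_prior([0]))      # ~0.5 on this toy machine
print(estimate_prior([1, 1]))   # ~0.125: longer prefixes need luckier coin flips
```

The `(1, 0)` instruction is there deliberately: it's a program that never halts but keeps producing output, and it still contributes mass to the prefixes it prints.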
Denotationally, we're going to mix over a computable enumeration of lower semicomputable semimeasures. (This is the perspective I prefer, I think, but it's equivalent.) So, in a really precise way, the hypotheses have to be lower semicomputable, which is a strictly larger class than the computable ones, and, in fact, the universal mixture itself is lower semicomputable (even though it is uncomputable).
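Spelled out (this is the standard form, nothing novel here): fix an enumeration $\nu_1, \nu_2, \dots$ of the lower semicomputable semimeasures and weights $w_i > 0$ with $\sum_i w_i \le 1$ (e.g. $w_i = 2^{-i}$), and define

$$\xi(x) \;=\; \sum_i w_i \, \nu_i(x).$$

Each $\nu_i(x)$ can be approximated from below by running a program longer and longer, and so can the weighted sum, which is why $\xi$ lands in exactly the same class as its own hypotheses: lower semicomputable, not computable.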
This is why it’s not inconsistent to imagine Solomonoff inductors showing up as hypotheses inside Solomonoff induction.
OK, I understand that, although I never did find a proof that they're equivalent in UAI. If you know where it is, please point it out to me.
I still think that Solomonoff induction assigns probability 0 to uncomputable bit strings, and I don't see why you don't think so.
Like, the outputs of programs that never halt are still computable, right? I thought we were just using a "prints something eventually" oracle, not a halting oracle.
Just realized that (Hutter et al., 2011) isn't UAI. Been looking for this source everywhere, thanks.