Shane Legg’s text Solomonoff Induction has helped me a lot over the last few days. I was trying to iron out the kinks in my understanding of Eliezer’s argument in this old thread:
Solomonoff’s universal distribution isn’t trying to assume the universe is computable. It’s trying to beat any computable probability distribution you could put over the universe. In other words, if your attempt to calculate, to talk about and reason about the universe is computable—even if you are talking about systems that you think are uncomputable—then your ratiocinations appear somewhere in Solomonoff induction.
Eliezer’s statement is correct (more precisely, a computable human cannot beat Solomonoff in accumulated log scores by more than a constant, even if the universe is uncomputable and loves the human), but understanding his purported proof is tricky. Legg’s text doesn’t give any direct answer to the question at hand, but all the technical details in there, like the difference between “measures” and “semimeasures”, are really damn important if you want to work out the answer for yourself. I know mathematics has many areas where an “intuitive understanding” kinda sorta suffices. This is not one of those areas.
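To pin down the constant, here is the standard dominance inequality the log-score claim reduces to. This is my notation and a sketch rather than anything quoted from Legg's text: let μ be the human's computable predictive distribution over binary sequences, M Solomonoff's universal semimeasure, and K(μ) the length of the shortest program computing μ. Universality of M gives, for every finite string,

\[
  M(x_{1:n}) \;\geq\; c_\mu\, \mu(x_{1:n}), \qquad c_\mu \approx 2^{-K(\mu)},
\]

and taking logarithms,

\[
  \log_2 \mu(x_{1:n}) \;-\; \log_2 M(x_{1:n}) \;\leq\; \log_2 \tfrac{1}{c_\mu} \;\approx\; K(\mu).
\]

The left-hand side is exactly how much the human's accumulated log score exceeds Solomonoff's after n symbols, and the bound holds for every sequence, with no assumption that whatever generated it is computable. Note also that M is only a semimeasure, so its conditional predictions can sum to less than one; that is the kind of detail Legg's text forces you to keep straight.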