we can adopt the general rule that mentioning K-complexity in a discussion of physics is always a sign of confusion :-)
Mentioning it anywhere except algorithmic information theory is a sign of confusion. This includes theology and parapsychology. Use just Bayes or, if you want to be all fancy, updateless-like decision theories. I love algorithmic probability to death but it’s just not something you should use casually. Too many pitfalls.
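To make one of those pitfalls concrete, here is a minimal sketch (my own illustration, not anything from the thread) of what a "complexity-weighted prior" looks like if you casually operationalize K-complexity with a real compressor. The function names and the zlib-based proxy are assumptions for illustration; actual Kolmogorov complexity is uncomputable.

```python
import zlib

def crude_complexity(s: bytes) -> int:
    """Compressed length as a rough, compressor-dependent stand-in
    for Kolmogorov complexity (which is uncomputable)."""
    return len(zlib.compress(s))

def crude_prior_weight(s: bytes) -> float:
    """A 2^-K style weight, using the crude proxy above."""
    return 2.0 ** -crude_complexity(s)

# A highly regular string gets a much higher "prior" than a varied one:
regular = b"a" * 400
varied = bytes(range(256))
assert crude_complexity(regular) < crude_complexity(varied)
assert crude_prior_weight(regular) > crude_prior_weight(varied)

# The pitfall: these "priors" depend on the choice of compressor and
# encoding, aren't normalized over any hypothesis space, and say
# nothing about likelihoods -- which is roughly why invoking
# K-complexity outside algorithmic information theory tends to mislead.
```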
Also keep in mind that algorithmic information/probability theory is actually quite hard to interpret correctly—the basic, intuitive way to read meaning into the math is not quite the way to do it. cousin_it has a post or two correcting some intuitive errors of interpretation.
I found these:
Intuitive Explanation of Solomonoff Induction (lukeprog)
Does Solomonoff always win? (cousin_it)
K-complexity of everyday things (cousin_it)
Solomonoff Induction, by Shane Legg (cousin_it)
I would appreciate it if people could link me to more.
Alas, none of those are the ones I had in mind, I think. I'm actually rather busy visiting home, so I can only justify certain comments to myself, but I hope someone provides the links.
For what it's worth, I'm a little skeptical of lukeprog's understanding of SI. No offense to him meant; it's just that I happen to believe he made a rather big error when interpreting the math. On the other hand, cousin_it seems to be really on the ball here. Those are just my impressions; I'm a pretend philosopher, not a compsci dude. At any rate, I think it'd be just dandy for cousin_it to check Luke's posts and share his impressions or critiques.
Here’s one I was thinking of:
The prior of a hypothesis does not depend on its complexity (cousin_it)
(If I recall correctly, Nesov's comment there clearly demonstrates the important point.)
That post seems to mix together the concept of a prior with the concept of experience.
http://lesswrong.com/lw/328/description_complexity_an_apology_and_note_on/
Bayes requires a prior.
No one should ever need to discuss “priors”. Focus on the likelihood ratio.
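The "focus on the likelihood ratio" point can be sketched with Bayes' rule in odds form, where the prior enters only as a starting multiplier and the evidence contributes the same factor whatever prior you began with. This is my own minimal illustration with made-up numbers, not anything from the thread.

```python
# Bayes' rule in odds form: O(H|E) = O(H) * P(E|H) / P(E|~H).
# The likelihood ratio P(E|H)/P(E|~H) is the evidence's contribution,
# and it is the same factor regardless of the prior odds.

def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Update odds on hypothesis H after seeing evidence E."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds: float) -> float:
    """Convert odds O to a probability O / (1 + O)."""
    return odds / (1.0 + odds)

# Example with made-up numbers: prior odds of 1:99, and evidence that
# is 20 times more likely under H than under not-H.
post = posterior_odds(1 / 99, 20.0)
print(odds_to_prob(post))  # ~0.168
```

Two people with different priors who see the same evidence multiply by the same likelihood ratio, which is why the ratio, not the prior, is the part worth arguing about.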
...but that’s like comparing apples and cheese!