This is pretty much unrelated, but do you think maybe you could write a short post about the relevance of algorithmic probability for human rationality? There’s this really common error ’round these parts where people say a hypothesis (e.g. God, psi, etc.) is a priori unlikely because it is a “complex” hypothesis according to the universal prior. Obviously the “universal prior” says no such thing; people are just taking whatever cached category of hypotheses they think are more probable for other, unmentioned reasons and then labeling that category “simple”, which might have to do with coding theory but has nothing to do with algorithmic probability. Considering this appeal to simplicity is one of the most common attempted argument stoppers, it might benefit the local sanity waterline to discourage this error. Fewer “priors”, more evidence.
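(For concreteness, one standard way of writing the universal prior: fix a universal prefix machine U and give a string x the weight

    M(x) = \sum_{p : U(p) = x*} 2^{-|p|}

where the sum ranges over programs p whose output begins with x, each weighted by its length in bits. The “complexity” in question is shortest-program length relative to that particular U, pinned down only up to an additive constant that depends on the choice of U; nothing in the definition lets you read off by introspection how many bits “God” or “psi” costs.)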
ETA: I feel obliged to say that though algorithmic probability isn’t that useful for describing humans’ epistemic states, it’s very useful for talking about FAI ideas; it’s basically a tool for transforming indexical information about observations into logical information about programs, and also into proofs thanks to the Curry-Howard isomorphism, which is pretty cool, among other things.
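(As a throwaway illustration of the Curry-Howard side of that, and not anything specific to the FAI application: in a typed language like Haskell, a type can be read as a proposition and a total program of that type as a proof of it, so the little composition function below doubles as a proof that implication is transitive.)

    -- Reading the type as a proposition: (a -> b) -> (b -> c) -> (a -> c)
    -- says “if A implies B and B implies C, then A implies C”, and this
    -- total function is a proof of it.
    transitive :: (a -> b) -> (b -> c) -> (a -> c)
    transitive f g = g . f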
I already have a post about that. Unfortunately I screwed up the terminology and was rightly called on it, but the point of the post is still valid.
Thanks. I actually found your amendment more enlightening. Props again for your focus on the technical aspects of rationality; stuff like that is the saving grace of LW.