[META] As a general heuristic, when you encounter a post from someone otherwise reputable that seems completely nonsensical to you, it may be worth attempting to find some reframing of it that causes it to make sense—or at the very least, make more sense than before—instead of addressing your remarks to the current (nonsensical-seeming) interpretation. The probability that the writer of the post in question managed to completely lose their mind while writing said post is significantly lower than both the probability that you have misinterpreted what they are saying, and the probability that they are saying something non-obvious which requires interpretive effort to be understood. To maximize your chances of getting something useful out of the post, therefore, it is advisable to condition on the possibility that the post is not saying something trivially incorrect, and see where that leads you. This tends to be how mutual understanding is built, and is a good model for how charitable communication works. Your comment, to say the least, was neither.
This is the first thing I’ve read from Scott Garrabrant, so “otherwise reputable” doesn’t apply here. And I have frequently seen things written on LessWrong that display significant misunderstandings of the philosophical basis of Bayesian probability, which gives me a high prior for expecting more of the same.