you should be able to notice mistakes later when other people disagree with you or when you can’t get your model of the world to reach a certain level of coherence. This is much faster than actively checking every belief.
I doubt that’s the case if you take into account the difficulty of changing one’s mind after noticing other people disagreeing, and the difficulty of seeing inconsistencies in one’s own beliefs after they’ve settled in for a while. Obviously we can strive to be better at both, but even the best would-be rationalists among us are still quite bad at these skills, when measured on an absolute scale.
Similarly, I suggest that in most cases, it’s better to be underconfident than to be overconfident, because of the risk that if you believe something too much, you might get stuck with that belief and fail to update if contrary evidence comes along.
In general, I’m much more concerned about not getting stuck with a false belief than about maximizing my Bayes score in the short run. Acquiring new knowledge just doesn’t seem that hard, yet I see a lot of otherwise intelligent people apparently stuck with false beliefs.
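(By ‘Bayes score’ I mean, roughly, the standard logarithmic scoring rule; as a minimal sketch, if $p_i(x_i)$ is the probability you assigned to the outcome $x_i$ that actually occurred on the $i$-th question, then

$$\text{Bayes score} = \sum_i \log p_i(x_i),$$

so maximizing it means concentrating probability on what actually happens, and confidently wrong answers are penalized heavily.)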
ETA: To return to my original point, why not write your conversation ideas down as blog posts? Then I don’t have to check them myself: I can just skim the comments to see if others found any errors. It seems like you can also reach a much bigger audience with the same effort that way.
I don’t think, to a first approximation, that written communication much less careful than Eliezer’s Sequences can successfully communicate the content of surprising ideas to very many people at all.
I see lots of intelligent people who are not apparently stuck with false beliefs. Normatively, I don’t even see myself as having ‘beliefs’ but rather integrated probabilistic models. One doesn’t occasionally change those because one was wrong; rather, the laws of inference require changing them in response to every piece of information one encounters, whether the new information is surprising or unsurprising. This crude normative model isn’t an option for a human mind, given how a human mind works, but neither, I suspect, is the sort of implicit model it is being contrasted with, at least if that model is cashed out in detail at its current level of development.
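A toy illustration of what updating on every piece of information looks like (a minimal sketch with made-up numbers, not a description of how any human actually reasons): repeated applications of Bayes’ rule move the posterior on every observation, whether that observation is surprising or unsurprising.

```python
# Toy sketch: Bayesian updating on a biased-coin hypothesis.
# Every observation shifts the posterior; there is no separate
# "notice I was wrong, then change my belief" step.

def update(prior_h, p_e_given_h, p_e_given_not_h):
    """One application of Bayes' rule: P(H|e) = P(e|H) P(H) / P(e)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# H: the coin lands heads 70% of the time; not-H: the coin is fair.
p_h = 0.5
for flip in ["H", "H", "T", "H", "T", "H", "H"]:
    if flip == "H":
        p_h = update(p_h, 0.7, 0.5)   # unsurprising under H: small shift up
    else:
        p_h = update(p_h, 0.3, 0.5)   # surprising under H: larger shift down
    print(flip, round(p_h, 3))        # the posterior moves on every flip
```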