“Even aside from that, what is the point of learning faster, if you end up learning a lot of facts and ideas that aren’t true?” Your Bayes Score goes up on net ;-)
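For concreteness, the Bayes score here is the logarithmic scoring rule: the sum of the log probabilities you assigned to what actually happened. A minimal sketch with made-up numbers of how learning faster can raise the score on net even when a few of the newly acquired beliefs turn out false:

```python
import math

# Logarithmic ("Bayes") score: sum of log(probability assigned to the actual outcome).
# Higher (less negative) is better.
def bayes_score(predictions):
    return sum(math.log(p_actual) for p_actual in predictions)

# Baseline: ten questions left at the ignorant prior of 0.5.
baseline = bayes_score([0.5] * 10)

# Fast learner: confident (0.9) on all ten, but two of those confident beliefs
# are wrong, so the actual outcome only received probability 0.1 there.
fast_learner = bayes_score([0.9] * 8 + [0.1] * 2)

print(baseline)      # ~ -6.93
print(fast_learner)  # ~ -5.45: higher on net, despite the two mistakes
```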
I agree that fearing mistakes you make and don’t notice is much better than not minding the mistakes you don’t notice, but you should be able to notice mistakes later when other people disagree with you or when you can’t get your model of the world to reach a certain level of coherence. This is much faster than actively checking every belief. If a belief is wrong, and you have good automatic processes that propagate its implications and draw attention to incoherence when belief nodes are pushed back and forth by the implications of your other beliefs pulling in conflicting directions, you don’t even need people to criticize you, much less criticize you well, though both still help.

I also think that simply wanting true beliefs, without fearing untrue ones, can produce the desired effect. A lot of people try to accomplish with negative emotions things that could be accomplished better with positive emotions. Positive emotions do carry a greater risk of wireheading and of only wanting to believe that your beliefs are correct, in the absence of proper controls, but they don’t cost nearly as much mental energy per unit of effort. Increased emotional self-awareness reduces the wireheading risk, since you are more likely to notice the emotional impact of suppressed awareness of errors. Classic meditation techniques, yoga, varied life experience, and physical exercise all boost emotional self-awareness and have positive synergies. I can discuss this more, but once again, unfortunately mostly only in person, though I can take long pauses in the conversation if reminded.
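The “belief nodes being pushed back and forth” picture can be made concrete with a toy sketch (my own illustration of the idea, not a mechanism either commenter specifies): propagate each belief’s implications to its neighbours and flag any node that is receiving substantial pushes in both directions.

```python
from collections import defaultdict

# Toy belief network: beliefs are probabilities; each implication says
# "if src is likely, push dst toward target with the given weight".
beliefs = {"A": 0.9, "B": 0.5, "C": 0.8}
implications = [
    ("A", "B", 0.9, 1.0),  # A strongly suggests B is true
    ("C", "B", 0.1, 1.0),  # C strongly suggests B is false
]

def propagate_and_flag(beliefs, implications, threshold=0.2):
    push_up = defaultdict(float)
    push_down = defaultdict(float)
    for src, dst, target, weight in implications:
        force = weight * beliefs[src] * (target - beliefs[dst])
        if force > 0:
            push_up[dst] += force
        else:
            push_down[dst] -= force
    # A node pushed hard in both directions signals incoherence somewhere
    # upstream, worth conscious attention even without outside criticism.
    return [n for n in beliefs if push_up[n] > threshold and push_down[n] > threshold]

print(propagate_and_flag(beliefs, implications))  # ['B']
```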
Perhaps the difference here is one of risk sensitivity: just as a gambler going strictly for long-term gains over the largest number of iterations will use the Kelly Criterion, Michael Vassar optimizes for becoming the least wrong when scores are tallied up at the end of the game, whereas Wei Dai would prefer to minimize the volatility of his wrongness, taking smaller but steadier gains in correctness.
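For reference, the Kelly Criterion stakes the fraction of bankroll that maximizes expected log growth, while a “fractional Kelly” bettor accepts slower growth in exchange for much lower variance, which is roughly the contrast being drawn here. A short sketch with made-up numbers:

```python
import math

# Kelly fraction for a bet paying b-to-1 that wins with probability p:
#   f* = (b*p - (1 - p)) / b
def kelly_fraction(p, b):
    return (b * p - (1 - p)) / b

# Expected log-growth per bet when staking fraction f of the bankroll.
def log_growth(f, p, b):
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

p, b = 0.6, 1.0                  # 60% chance to win an even-money bet
full = kelly_fraction(p, b)      # 0.2: maximizes long-run growth
half = full / 2                  # "half Kelly": slower but much steadier

print(log_growth(full, p, b))    # ~0.0201 per bet
print(log_growth(half, p, b))    # ~0.0150 per bet, with far less variance
```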
you should be able to notice mistakes later when other people disagree with you or when you can’t get your model of the world to reach a certain level of coherence. This is much faster than actively checking every belief.
I doubt that’s the case if you take into account the difficulty of changing one’s mind after noticing other people disagreeing, and the difficulty of seeing inconsistencies in one’s own beliefs after they’ve settled in for a while. Obviously we can strive to be better at both, but even the best would-be rationalists among us are still quite bad at these skills, when measured on an absolute scale.
Similarly, I suggest that in most cases, it’s better to be underconfident than to be overconfident, because of the risk that if you believe something too much, you might get stuck with that belief and fail to update if contrary evidence comes along.
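The asymmetry is easy to quantify in log-odds terms: the more confident the belief, the more bits of contrary evidence it takes to drag it back to even odds. A minimal sketch:

```python
import math

# Log-odds (in bits) of a belief held with probability p. Each bit of
# contrary evidence halves the odds, so this is also the number of bits
# of contrary evidence needed to pull the belief back to 50%.
def bits(p):
    return math.log2(p / (1 - p))

for p in (0.9, 0.99, 0.999):
    print(p, round(bits(p), 1))
# 0.9   -> ~3.2 bits
# 0.99  -> ~6.6 bits
# 0.999 -> ~10.0 bits
```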
In general, I’m much more concerned about not getting stuck with a false belief than maximizing my Bayes score in the short run. It just seems like learning new knowledge is not that hard, but I see a lot of otherwise intelligent people apparently stuck with false beliefs.
ETA: To return to my original point, why not write your conversation ideas down as blog posts? Then I don’t have to check them myself: I can just skim the comments to see if others found any errors. It seems like you can also reach a much bigger audience with the same effort that way.
I don’t think, at a first approximation, that written communication much less careful than Eliezer’s sequences can successfully communicate the content of surprising ideas to very many people at all.
I see lots of intelligent people who are not apparently stuck with false beliefs. Normatively, I don’t even see myself as having ‘beliefs’ but rather integrated probabilistic models. One doesn’t occasionally have to change those because one was wrong; rather, the laws of inference require that you change them in response to every piece of information you encounter, whether the new info is surprising or unsurprising. This crude normative model doesn’t describe an option available to a human mind, given how human minds work, but neither, I suspect, does the sort of implicit model it is being contrasted with, at least if that model is cashed out in detail at its current level of development.
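The point that the laws of inference demand an update on every observation, surprising or not, is just Bayes’ rule applied uniformly; a minimal sketch with illustrative likelihoods:

```python
# Bayes' rule: every observation moves the posterior, whether the
# likelihood ratio is dramatic (surprising evidence) or mild (unsurprising).
def update(prior, p_e_given_h, p_e_given_not_h):
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

p = 0.5
p = update(p, 0.55, 0.50)   # unsurprising observation: small nudge to ~0.524
p = update(p, 0.90, 0.10)   # surprising observation: big jump to ~0.908
print(p)
```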