This is among the top questions you ought to accumulate insights on if you’re trying to do something difficult.
I would advise primarily focusing on how to learn more from yourself as opposed to learning from others, but still, here’s what I think:
I. Strict confusion
Seek to find people who seem to be doing something dumb or crazy, and for whom the feeling you get when you try to understand them is not “I’m familiar with how someone could end up believing this” but instead “I’ve got no idea how they ended up there, but that’s just absurd”. If someone believes something wild, and your response is strict confusion, that’s high value of information. You can only safely say they’re low-epistemic-value if you have evidence for some alternative story that explains why they believe what they believe.
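“Value of information” here can be read in the ordinary decision-theory sense: information matters exactly when it could change which action you take. A minimal sketch with invented payoffs (not from the original post):

```python
# Toy value-of-information calculation (all numbers invented for illustration).
# Two actions, two possible world-states, and you currently put 50% on each state.
p_state_a = 0.5
payoff = {                       # payoff[action][state]
    "act_1": {"A": 10, "B": 0},
    "act_2": {"A": 0,  "B": 8},
}

def expected_payoff(action, p_a):
    return payoff[action]["A"] * p_a + payoff[action]["B"] * (1 - p_a)

# Best you can do while still uncertain:
best_without_info = max(expected_payoff(a, p_state_a) for a in payoff)

# With perfect information you pick the best action in each state,
# weighted by how likely that state is:
best_with_info = (p_state_a * max(payoff[a]["A"] for a in payoff)
                  + (1 - p_state_a) * max(payoff[a]["B"] for a in payoff))

print("value of perfect information:", best_with_info - best_without_info)  # 9.0 - 5.0 = 4.0
```

When you are genuinely confused about which state you are in and the actions diverge, that difference is large; that is the sense in which strict confusion carries high value of information.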
II. Surprisingly popular
Alternatively, find something that is surprisingly popular—because if you don’t understand why someone believes something, you cannot exclude that they believe it for good reasons.
The meta-trick to extracting wisdom from society’s noisy chatter is to learn what drives people’s beliefs in general; then, if your model fails to predict why someone believes something, you can either learn something about human behaviour or about whatever evidence you don’t have yet.
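The heading echoes the “surprisingly popular” aggregation idea (Prelec et al.): ask people both for their answer and for their prediction of how others will answer, then favour the answer whose actual popularity exceeds its predicted popularity. A minimal sketch, with poll numbers invented purely for illustration:

```python
# Sketch of the "surprisingly popular" rule: collect each person's answer
# plus their prediction of how others will answer, then pick the answer
# whose actual share most exceeds its average predicted share.
# (Numbers below are invented for illustration.)

def surprisingly_popular(actual_share, predicted_share):
    """Return the answer with the largest actual-minus-predicted gap."""
    return max(actual_share, key=lambda ans: actual_share[ans] - predicted_share[ans])

# Classic example question: "Is Philadelphia the capital of Pennsylvania?"
actual_share    = {"yes": 0.65, "no": 0.35}   # what respondents answered
predicted_share = {"yes": 0.80, "no": 0.20}   # what respondents expected others to answer

print(surprisingly_popular(actual_share, predicted_share))  # -> "no" (correct: it's Harrisburg)
```

The minority answer wins here because the people who know the capital is Harrisburg also correctly predict that most others will get it wrong.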
III. Sensitivity >> specificity
It’s easy to relinquish old beliefs if you are ever-optimistic that you’ll find better ideas than whatever you have now. If you look back at what you wrote a year ago and think “huh, that guy really had it all figured out,” you should be suspicious that you’ve stagnated. Strive to be embarrassed by your past world-model—it implies progress.
So trust your mind to adapt to new evidence, and tune your sensitivity up as high as the capacity of your discriminator allows. False positives are usually harmless and quick to relinquish—and if they aren’t, then believing something false for as long as it takes you to find the counter-argument is a really good way to discover general weaknesses in your epistemic filters.[1] You can’t upgrade your immune system without exposing yourself to infection every now and then. Another frame on this:
I was being silly! If the hotel was ahead of me, I’d get there fastest if I kept going 60mph. And if the hotel was behind me, I’d get there fastest by heading at 60 miles per hour in the other direction. And if I wasn’t going to turn around yet … my best bet given the uncertainty was to check N more miles of highway first, before I turned around. — The correct response to uncertainty is *not* half-speed — LessWrong
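Returning to the sensitivity-over-specificity framing, here is a toy signal-detection sketch (scores invented): lowering your acceptance threshold for new ideas raises sensitivity at the cost of provisionally accepting more duds, which is a fine trade when duds are cheap to discard.

```python
# Toy illustration (invented scores): how lowering an acceptance threshold
# trades specificity for sensitivity when filtering incoming ideas.

good_ideas = [0.9, 0.7, 0.55, 0.4]   # plausibility scores of ideas that are in fact good
bad_ideas  = [0.6, 0.45, 0.3, 0.1]   # plausibility scores of ideas that are in fact duds

def rates(threshold):
    tp = sum(s >= threshold for s in good_ideas)   # good ideas you accept
    fp = sum(s >= threshold for s in bad_ideas)    # duds you provisionally accept
    sensitivity = tp / len(good_ideas)
    specificity = 1 - fp / len(bad_ideas)
    return sensitivity, specificity

for t in (0.7, 0.5, 0.3):
    sens, spec = rates(t)
    print(f"threshold={t:.1f}  sensitivity={sens:.2f}  specificity={spec:.2f}")
# Lower thresholds catch every good idea; the extra false positives are
# the cheap, quick-to-relinquish kind the section argues for tolerating.
```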
IV. Vingean deference limits
The problem is that if you select people cautiously, you miss out on hiring people significantly more competent than you. People of much higher competence will behave in ways you don’t recognise as more competent. If you were able to tell what the right things to do are, you would just do those things and be at their level. Innovation on the frontier is anti-inductive.
finding ppl who are truly on the right side of this graph is hard bc it’s easy to mis-see large divergence as craziness. lesson: only infer ppl’s competence by the process they use, ~never by their object-level opinions. u can ~only learn from ppl who diverge from u. — some bird
V. Confusion implies VoI, not stupidity
look for epistemic caves wherefrom survivors return confused or “obviously misguided”. — ravens can in fact talk btw
[1] Here assuming that investing credence in the mistaken belief increased your sensitivity to finding its counterargument. For people who are still at a level where credence begets credence, this could be bad advice.
VI. Epistemic surface area / epistemic net / wind-vane models / some better metaphor
Every model you have internalised as truly part of you—however true or false—increases your ability to notice when evidence supports or conflicts with it. As long as you place your flag somewhere to begin with, the winds of evidence will start pushing it in the right direction. If wariness about believing something verifiably false prevents you from earning an epistemic income, consider what you’re really optimising for. Beliefs pay rent in anticipated experiences, regardless of whether they turn out to be correct in the end.
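One way to make “place your flag somewhere” concrete: a committed belief makes predictions, so every observation moves it, whereas a belief that predicts nothing never gets corrected. A minimal Bayesian sketch with made-up numbers:

```python
# Minimal sketch (made-up likelihoods): a confidently-held but wrong belief
# still pays rent, because each observation it mispredicts pushes it down.

def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    """Posterior P(hypothesis) after one observation."""
    joint_h = p_obs_given_h * prior
    return joint_h / (joint_h + p_obs_given_not_h * (1 - prior))

credence = 0.8                # the flag: planted confidently in the wrong place
for step in range(5):         # five observations the hypothesis predicts poorly
    credence = bayes_update(credence, p_obs_given_h=0.2, p_obs_given_not_h=0.7)
    print(f"after observation {step + 1}: {credence:.3f}")
# The winds of evidence push the flag toward the truth. A "belief" with
# equal likelihoods under both hypotheses would never have moved at all.
```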