Q: Given a question, how should we go about answering it? A: By gathering evidence effectively, and correctly applying reason and intuition.
An important point omitted in the proposed answer: Reduce the question to subquestions stated in primitively testable terms. Try to dispel confusions. Make sure the question even makes sense. For a large class of questions (all classical philosophical and ethical questions, and most political ones), starting to gather evidence before the question has been reduced is a mistake. This is perhaps the most important idea that can be learned from LW.
Q: How can we effectively gather relevant evidence?
When it is clear what the relevant evidence is (any fact whose probability depends strongly on the tested hypothesis) and when a reliable intuitive understanding of probability is at hand, this should be easy. (Depending, of course, on what level of effectiveness you are aiming for.) Most common errors in reasoning don’t stem from a lack of evidence, but from incorrect intuitive probabilistic analysis. Creationists often know a lot of relevant facts.
Q: We don’t have infinite computational resources available, so what now? A: I don’t know. (Apply Bayes’ rule anyway? Just try to emulate what a hypercomputer would do?)
Apply Bayes’ rule anyway. The result will not be perfect and you should be aware of that, but in the majority of situations it’s still an improvement over intuitive guesses.
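To make “apply Bayes’ rule anyway” concrete, here is a minimal sketch of the update arithmetic in Python, with purely hypothetical numbers standing in for the intuitive estimates (the values are placeholders, not claims):

```python
# Minimal Bayesian update with rough, intuition-supplied numbers.
# All values below are hypothetical placeholders, not actual estimates.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) from P(H), P(E | H) and P(E | not-H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.3  # intuitive prior for the hypothesis H
posterior = bayes_update(prior,
                         p_e_given_h=0.8,       # the observed fact E is likely if H holds
                         p_e_given_not_h=0.2)   # E is fairly unlikely otherwise
print(round(posterior, 3))  # 0.632 with these inputs
```

The inputs are crude, but writing them down explicitly is what lets you notice when one intuitive number is doing all the work.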
Q: How can we find our biases? A: I don’t know. (Read Less Wrong? What about our personal quirks? How can we notice those?)
It’s hard to do perfectly, of course. But simply by learning about several standard biases I was able to spot such patterns of reasoning in my own thoughts, and I believe I don’t commit them as often now as I did in the past. Personal quirks? Listen to the feedback you get from others.
Q: Once we find a bias, how can we fix it?
Retreat to more formalised reasoning, if possible.
Apply Bayes’ rule anyway. The result will not be perfect and you should be aware of that, but in the majority of situations it’s still an improvement over intuitive guesses.
How do you determine the relevant probabilities? What if you’re looking for, say, the probability of a nuclear attack occurring anywhere in the world in the next 20 years?
Q: Once we find a bias, how can we fix it?
Retreat to more formalised reasoning, if possible.
Yes, but that doesn’t remove the bias. Surely if it’s at all possible to remove a bias, that’s better than circumventing it through formal reasoning, because formal reasoning is much slower than intuition.
How do you determine the relevant probabilities? What if you’re looking for, say, the probability of a nuclear attack occurring anywhere in the world in the next 20 years?
What information are you updating from?
All information available to me.
Such as? I am probably unable to give you a wholly general prescription for P(X | nuclear war is going to happen) valid for all X; I have no idea what such a prescription would even look like, even if infinite computing power were available, unless you want me to simply classify all sorts of information and all sorts of hypotheses relevant to a nuclear attack. Of course it would be nice to have some general prescription for mechanically detecting what information is relevant, but I think this is a problem distinct from Bayesian updating.
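For the nuclear-attack question specifically, one way to proceed in this spirit is to reduce it to smaller subquestions and plug in intuition-generated per-year numbers. A rough sketch of that bookkeeping, treating the pathways as roughly independent (itself a simplification) and using purely illustrative figures rather than estimates of mine:

```python
# Reducing "a nuclear attack anywhere in the world within 20 years" into
# per-year, per-pathway subquestions. All probabilities are illustrative
# placeholders; the point is the explicit combination rule, not the figures.

annual = {  # intuitive annual probability of each pathway
    "state conflict escalates to nuclear use": 0.002,
    "non-state actor detonates a device":      0.001,
    "accidental or unauthorised launch":       0.0005,
}

# P(no attack in a given year), assuming the pathways are roughly independent.
p_quiet_year = 1.0
for p in annual.values():
    p_quiet_year *= 1 - p

years = 20
p_attack = 1 - p_quiet_year ** years  # at least one attack over the horizon
print(f"P(attack within {years} years) = {p_attack:.2f}")  # roughly 0.07 here
```

The particular figures mean nothing; what matters is that each intuition-supplied number can now be questioned and updated separately.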
“Apply Bayes’ rule anyway” is not a method of reasoning unless we have some way of determining what the numbers are. If we don’t have a method for finding the numbers, then we still have work to do before calling Bayes’ rule a method of reasoning.
I haven’t said we have no way of determining the numbers. I have said that I can’t concisely formulate a rule whose domain of definition is the set of all possible information. What you are asking for is basically outlining a large part of the code of a general artificial intelligence. This is out of reach, but it doesn’t mean we can’t update at all. Some of the probabilities plugged in will almost certainly be generated by intuition, but I don’t think a method of reasoning has to remove all arbitrariness to be called one.
What you are asking for is basically outlining a large part of the code of a general artificial intelligence.
Kind of! I’m asking for the best algorithm for human intelligence we can come up with. I guess that indeed, the phrase “apply Bayes’ rule” is significantly better than nothing at all.