It’s a good point: having a target in mind, and then searching for the best arguments for it, is a familiar experience.
The “bottom line” might be a math theorem I’m trying to prove. Or a claim in a discussion with friends that I’m trying to argue for. Or the hope that I can solve a programming problem within certain constraints. In all these cases, my thinking is directed at a goal. And I can’t currently imagine (which is weak evidence) a form of thinking that isn’t goal-directed and yet reliably produces good results.
The rationalist ideal isn’t “don’t argue towards a goal”. It’s “always remember that until you have a rock-solid argument, the goal is desired but unproven, and to be treated as an assumption at best. Always be ready to discard or modify the goal, as evidence comes in. And when you do have a proof, have it checked by counter-motivated others.”
The Bottom Line doesn’t tell you to argue without a goal. It just tells you to weigh all evidence correctly, for and against your goal. It may help if your goal is explicitly formulated as a question rather than as a statement which might be false. But in the end, it’s just about fighting biases: remembering which claims we’ve assumed without proof in order to explore their consequences, and so on.
From that post, a description of wrong arguing:
First, he writes, “And therefore, box B contains the diamond!” at the bottom of his sheet of paper. Then, at the top of the paper, he writes, “Box B shows a blue stamp,” [… and so on]; yet the clever arguer neglects all those signs which might argue in favor of box A.
(my emphasis). And a description of right arguing:
she first writes down all the distinguishing signs of both boxes on a sheet of paper, and then applies her knowledge and the laws of probability and writes down at the bottom: “Therefore, I estimate an 85% probability that box B contains the diamond.”
One difference is indeed that the second arguer did not write a bottom line before writing out all the evidence. But that’s misleading. The second arguer already knew that she was arguing about which box contained the diamond. She had mentally written down at the bottom “and therefore box B contains the diamond with ___ probability”, and filled in the actual probability later.
Bayes-wise that is identical to writing, first, “box B contains the diamond with 50% probability” (your prior); and then modifying that number as each piece of evidence is considered. The real difference is that you must not ignore or mistreat part of the evidence, as the first arguer did.
And if you trust your lack of biases enough, you may even write down as the bottom line, “box B contains the diamond with 78% probability” (or whatever your preconceived belief is). And then, as you evaluate the evidence, instead of modifying that number, you append to the end: “I currently give this proposition such and such a probability (belief) distribution.” And you modify that last statement as the evidence comes in.
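The “identical Bayes-wise” claim above can be sketched numerically. In the odds form of Bayes’ theorem, the prior is just the starting factor, and each piece of evidence multiplies in a likelihood ratio, so the final answer doesn’t depend on the order of the evidence. The likelihood ratios below are made-up numbers, purely for illustration:

```python
from math import isclose

def update_odds(prior_prob, likelihood_ratios):
    """Update a prior probability by a sequence of likelihood ratios
    (odds form of Bayes' theorem), returning the posterior probability."""
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr  # each piece of evidence multiplies the odds
    return odds / (1 + odds)

# Hypothetical likelihood ratios for the signs on box B (illustrative only):
evidence = [3.0, 0.5, 4.0, 1.5]

# Starting from the uninformative 50% prior...
p_from_even = update_odds(0.50, evidence)

# ...the order in which the evidence arrives never matters:
assert isclose(update_odds(0.50, list(reversed(evidence))), p_from_even)
```

Starting from a preconceived bottom line like 78% instead of 50% is just a different prior odds factor; the same evidence still multiplies the odds by the same amount, which is why the real failure mode is ignoring or mistreating evidence, not having a number written down first.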
The whole issue may be summarized as follows:
We must update on all evidence fairly, including evidence that lowers our probability estimates for hypotheses we hold dear, the very hypotheses human bias tempts us to privilege.
To avoid this bias, it can help not to mentally give any idea such a privileged status to begin with.
Do not hold ideas dear. Hold reality dear.
Of course that should not be taken to mean you shouldn’t have goals that you try to argue for! Just remember that your attempts to argue can fail, and update accordingly.
In the spirit of further contrarianism, I’ll note that although your points are all valid, they don’t really save the message of “The Bottom Line” post unless you interpret that message rather liberally instead of taking it literally, which is undesirable under commonly held LW values.
[For example, atheists usually balk when people start interpreting the bible left and right, keeping the desirable conclusions and throwing away the rest, etc.]
No, it’s functionally identical to the original analogy. Rationalists make it easy to change their bottom line as new evidence comes in, so their bottom line isn’t fixed forever at the start.
For example, I recently scrapped a post because I found out that the anecdote I was going to start with wasn’t what I thought it was at all, which raised my estimate that I was oversimplifying the rest of it. Yeah, I started with an idea of what I wanted to write, but when I learned new things I changed my confidence in that idea.
I agree. “The Bottom Line” is not formulated as well as it might have been. It is possible to come away with a literal understanding like yours, which is wrong in important respects.
((edited here) There’s no point in discussing what the post “really” means. Its only function is to transmit ideas to readers. People’s understanding of it may be a map, but it’s the map we care about here, more than the territory.)