This is a good point, but I think there’s a ready solution to it. Agreement and disagreement, by themselves, are rather superficial; what rationalists respect are arguments. When you agree with someone, it seems you don’t bear the burden of formulating an argument because, implicitly, you’re referring to the first person’s argument. But when you disagree with someone, you do bear the burden of formulating a counterargument. I think this is why rationalists tend to have more respect for disagreement than for agreement: disagreement requires an argument, whereas agreement doesn’t.
But on reflection, this asymmetry is hard to justify. Why shouldn’t agreement also require an argument? It may seem that multiple people agreeing that an argument is sound adds to its strength, but I don’t think it does in reality. If multiple people develop the same argument independently, then the argument might be somewhat stronger; but clearly that isn’t the kind of agreement we’re talking about here. If I make an argument, you read my argument, and you then agree that it is sound, you haven’t developed the same argument independently. Worse, I’ve just biased you towards my argument.
The better alternative is this: when you agree with an argument, you should bear the burden of devising a different argument for the same conclusion. Of course, citing evidence also counts as an “argument”. In this manner, a community of rationalists can increase the strength of a conclusion through induction: the more independent arguments there are for a conclusion, the stronger that conclusion is, and the better it can be relied upon.
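To make that concrete, here is a minimal Bayesian sketch (my own framing, not anything established in this thread). Treat each genuinely independent argument or piece of evidence E_i as carrying its own likelihood ratio, so the support for a conclusion H multiplies:

    O(H \mid E_1, \ldots, E_n) = O(H) \cdot \prod_{i=1}^{n} \lambda_i, \qquad \lambda_i = \frac{P(E_i \mid H)}{P(E_i \mid \lnot H)}

On this sketch, merely agreeing after reading my argument contributes \lambda \approx 1, since your assent was nearly guaranteed either way; only an independently devised argument contributes a \lambda_i appreciably above 1.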
In that case you’re “writing the last line first”, and I suspect it might not reduce bias. Personally, I often try to come up with arguments against positions I hold or am considering, which sometimes work and sometimes don’t. Of course, this isn’t foolproof either, but it might be less problematic.
In real life this is common, and the results are not always bad. It’s incredibly common in mathematics. For example, Fermat’s Last Theorem was a “last line” for a long time, until someone finally filled in the argument. It may also be worth mentioning that the experimental method is also “last line first”. That is, at the start you state the hypothesis that you’re about to test, and then you test the hypothesis—which test, depending on the result, may amount to an argument from evidence for the hypothesis.
Another case in point, this time from history: Darwin and natural selection. At some point in his research, natural selection occurred to him. It wasn’t, at that point, something he had very strong evidence for, which is why he spent a lot of time gathering evidence and building an argument for it. So there’s another “last line first” that turned out pretty well in the end.
It may also be worth mentioning that the experimental method is also “last line first”. That is, at the start you state the hypothesis that you’re about to test, and then you test the hypothesis—which test, depending on the result, may amount to an argument from evidence for the hypothesis.
No. When you state the hypothesis, it means that, depending on the evidence you are about to gather, your bottom line will be that the hypothesis is true or that the hypothesis is false (or that you can’t tell if the hypothesis is true or false). Writing the Bottom Line First would be deciding in advance to conclude that the hypothesis is true.
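To illustrate the distinction, here is a toy sketch of my own (the threshold is made up): in an experiment, what is fixed in advance is the decision rule, a function from data to conclusion, not the conclusion itself.

    # Toy pre-registered decision rule (hypothetical threshold). What is
    # fixed in advance is this function, not its return value: the
    # conclusion is computed from the data after they arrive.
    def conclusion(successes: int, trials: int, threshold: float = 0.6) -> str:
        rate = successes / trials
        if rate >= threshold:
            return "hypothesis supported"
        if rate <= 1 - threshold:
            return "hypothesis contradicted"
        return "can't tell"

All three bottom lines remain live until the data come in; Writing the Bottom Line First would amount to hard-coding the return value.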
Deciding in advance to conclude that the hypothesis is true is not a danger if the means by which you decide to do it won’t, in reality, let you do it if the hypothesis is false. Keep in mind: you can decide to do something and still be unable to do it.
Suppose I believe that a hypothesis is true. I believe it so strongly that I believe a well-designed experiment will prove that it is true. So I decide in advance to conclude that the hypothesis is true by doing what I am positive in advance will prove the hypothesis: running a well-designed experiment that will convince the doubters. So I do that, and suppose the experiment supports my hypothesis. The fact that my intention was to prove the hypothesis doesn’t invalidate the result of the experiment. The experiment is, by its own good design, protected from my intentions.
A well-designed experiment will yield truth whatever the intentions of the experimenter. What makes an experiment good isn’t good intentions on the part of the experimenter. That’s the whole point of the experiment: we can’t trust the experimenter, and so the experiment by design renders the experimenter powerless. (Of course, we can increase our confidence even further by replicating the experiment.)
Now let’s change both the intention and the method. Suppose you don’t know whether a hypothesis is true and decide to discover whether it is by examining the evidence. The method you choose is “preponderance of evidence”. It is quite possible for you, completely erroneously and unintentionally, to in effect cherry-pick evidence for the hypothesis you were trying to test. People make procedural mistakes like this all the time without intending to. For example, you see one bit of evidence and note that this particular bit makes the hypothesis appear to be true. But now, uh oh! You’re subject to confirmation bias! That means you will automatically, without meaning to, start paying attention to confirming evidence and ignoring disconfirming evidence. And you didn’t mean to!
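Here is a toy simulation of that failure mode (the noticing rates are invented for illustration): the underlying evidence is evenly split, but a reader who is more likely to record confirming items than disconfirming ones ends up with a comfortable “preponderance”.

    import random

    # The evidence is actually balanced, but noticing is biased.
    random.seed(0)
    N_ITEMS = 1000          # pieces of evidence encountered
    P_CONFIRMING = 0.5      # the true evidence is evenly split
    P_NOTICE_CONFIRM = 0.9  # chance of recording a confirming item
    P_NOTICE_DISCONF = 0.5  # chance of recording a disconfirming item

    recorded_for, recorded_against = 0, 0
    for _ in range(N_ITEMS):
        confirming = random.random() < P_CONFIRMING
        if confirming and random.random() < P_NOTICE_CONFIRM:
            recorded_for += 1
        elif not confirming and random.random() < P_NOTICE_DISCONF:
            recorded_against += 1

    # With these made-up rates the recorded "preponderance" runs roughly
    # 9:5 in favor, even though the underlying evidence is balanced.
    print(f"for: {recorded_for}, against: {recorded_against}")

No single step here is dishonest; the bias lives entirely in which evidence gets recorded.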
Depending on where the hypothesis came from, the experimental method may be Privileging the Hypothesis, which the social process of science compensates for by requiring lots of evidence.
Absolutely, but privileging the hypothesis is a danger whether or not you have decided in advance to conclude the hypothesis. Look at Eliezer’s own description:
Then, one of the detectives says, “Well… we have no idea who did it… no particular evidence singling out any of the million people in this city… but let’s consider the hypothesis that this murder was committed by Mortimer Q. Snodgrass, who lives at 128 Ordinary Ln. It could have been him, after all.”
This detective has, importantly, not decided in advance to conclude that Snodgrass is the murderer.
I think what strikes me as strange is doing this after you’ve been convinced, seemingly to enhance your credence. Still, this is a good point.
The danger that Eliezer warns against is absolutely real. So what’s special about math? In the case of math, I think there is something special: it’s really, really hard to make a bogus argument in math and get it past somebody who’s paying attention. In the case of experimental science, the experiment is deliberately constructed to take the result out of the hands of the experimenter. At least it should be. The experimenter only controls certain variables.
So why is there ever a danger? The problem seems to arise with the mode of argument that appeals to “the preponderance of evidence”. That kind of argument is totally exposed to cherry-picking, which lets the cherry-picker create whatever preponderance he wants. It is, unfortunately, perhaps the most common kind of argument you’ll find in the world.
The two methods can be combined:
When you read something you agree with, try to come up with a counterargument. If you can’t refute the counterargument, post it; if you can, post both the counterargument and its refutation.
Sorry, I’m not exactly sure what “writing the last line first” means. I’m guessing you’re referring to the syllogism, and that you take my proposal to mean arguing backwards from the conclusion to produce another argument for the same conclusion. Is this correct?
I’m referring to this notion of knowing what you want to conclude, and then fitting the argument to that specification. My intuition, at least, is that it would be more useful to focus on weaknesses of your newly adopted position—and if it’s right, you’re bound to end up with new arguments in favor of it anyway.
I agree, though, that agreement should not be taken as license to avoid engaging with a position.
I suppose I should note, given the origin of these comments, that I recommend these things only in a context of collaboration—and if we’re talking about a concrete suggestion for action or the like rather than an airy matter of logic, the rules are somewhat different.
Should arguers be encouraged, then, not to write all the arguments in favor of their claim, in order to leave more room for those who agree with them to add their own supporting arguments?
This requires either refraining from fully exploring the subject (so that you don’t think of all the arguments you can) or straight out omitting arguments you thought of. Not exactly Dark Side, but not fully Light Side either...
Y’know, you may be right. I also suspect this is something that depends to a significant extent on the type of proposition under consideration.