In this community, agreeing with a poster such as yourself signals me as sycophantic and weak-minded; disagreement signals my independence and courage. There’s also a sense that “there are leaders and followers in this world, and obviously just getting behind the program is no task for so great a mind as mine”.
However, that’s not the only reason I might hesitate to post my agreement; I might prefer only to post when I have something to add, which would more usually be disagreement. Since I don’t only vote up things I agree with, perhaps I should start hacking on the feature that allows you to say “6 members marked their broad agreement with this point (click for list of members)”.
That would be great.
That would be a great feature, I think. Ditto on broad disagreements.
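To make the suggestion concrete, here is a minimal sketch of what such an agreement-marking feature might look like. Everything in it (the AgreementStore class, the summary wording, the member names) is hypothetical illustration, not an existing LessWrong API.

```python
# Hypothetical sketch of a "mark broad agreement" feature, kept separate from up/down votes.
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class AgreementMark:
    comment_id: int
    member: str
    marked_at: datetime


class AgreementStore:
    def __init__(self):
        # comment_id -> {member name -> mark}
        self._marks: dict[int, dict[str, AgreementMark]] = {}

    def mark(self, comment_id: int, member: str) -> None:
        """Record that `member` broadly agrees with `comment_id` (idempotent per member)."""
        self._marks.setdefault(comment_id, {})[member] = AgreementMark(
            comment_id, member, datetime.utcnow()
        )

    def members(self, comment_id: int) -> list[str]:
        """The list shown when the summary line is clicked."""
        return sorted(self._marks.get(comment_id, {}))

    def summary(self, comment_id: int) -> str:
        """The line shown under a comment."""
        n = len(self.members(comment_id))
        return f"{n} members marked their broad agreement with this point"


store = AgreementStore()
for who in ["alice", "bob", "carol"]:
    store.mark(comment_id=42, member=who)
print(store.summary(42))   # "3 members marked their broad agreement with this point"
print(store.members(42))   # ['alice', 'bob', 'carol']
```

The point of keeping the marks separate from karma is that the vote can stay a quality signal while the member list makes silent agreement visible.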
This is a good point, but I think there’s a ready solution. Agreement and disagreement, by themselves, are rather superficial; arguments are what rationalists have more respect for. When you agree with someone, it seems you don’t bear the burden of formulating an argument, because implicitly you’re relying on the first person’s argument. But when you disagree with someone, you do bear the burden of formulating a counterargument. I think that’s why rationalists tend to have more respect for disagreement than for agreement: disagreement requires an argument, whereas agreement doesn’t.
But on reflection, this arrangement is fallacious. Why shouldn’t agreement also require an argument? I think it may seem to add to the strength of an argument if multiple people agree that it is sound, but I don’t think it does in reality. If multiple people develop the same argument independently, then the argument might be somewhat stronger; but clearly this isn’t the kind of agreement we’re talking about here. If I make an argument, you read my argument, and then you agree that my argument is sound, you haven’t developed the same argument independently. Worse, I’ve just biased you towards my argument.
The better alternative: when you agree with an argument, you should take on the burden of devising a different argument for the same conclusion. Of course, citing evidence also counts as an “argument”. In this manner, a community of rationalists can increase the strength of a conclusion through induction; the more independent arguments there are for a conclusion, the stronger that conclusion is, and the better it can be relied upon.
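One way to make this precise (my gloss, not anything claimed above) is the odds form of Bayes’ theorem: if two lines of evidence $E_1$ and $E_2$ are conditionally independent given $H$ (and given $\neg H$), each contributes its own multiplicative factor to the posterior odds:

\[
\frac{P(H \mid E_1, E_2)}{P(\neg H \mid E_1, E_2)}
= \frac{P(H)}{P(\neg H)}
\cdot \frac{P(E_1 \mid H)}{P(E_1 \mid \neg H)}
\cdot \frac{P(E_2 \mid H)}{P(E_2 \mid \neg H)}.
\]

A genuinely independent second argument multiplies in its own likelihood ratio; a mere restatement of the first argument contributes a factor of roughly 1, which is exactly the worry about non-independent agreement raised above.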
In that case you’re “writing the last line first”, and I suspect it might not reduce bias. Personally, I often try to come up with arguments against positions I hold or am considering, which sometimes work and sometimes don’t. Of course, this isn’t foolproof either, but it might be less problematic.
In real life this is common, and the results are not always bad. It’s incredibly common in mathematics. For example, Fermat’s Last Theorem was a “last line” for a long time, until someone finally filled in the argument. It may also be worth mentioning that the experimental method is also “last line first”. That is, at the start you state the hypothesis that you’re about to test, and then you test the hypothesis—which test, depending on the result, may amount to an argument from evidence for the hypothesis.
Another case in point, this time from history: Darwin and natural selection. At some point in his research, natural selection occurred to him. It wasn’t, at that point, something he had very strong evidence for, which is why he spent a lot of time gathering evidence and building arguments for it. So there’s another “last line first” that turned out pretty well in the end.
It may also be worth mentioning that the experimental method is also “last line first”. That is, at the start you state the hypothesis that you’re about to test, and then you test the hypothesis—which test, depending on the result, may amount to an argument from evidence for the hypothesis.

No. When you state the hypothesis, it means that, depending on the evidence you are about to gather, your bottom line will be that the hypothesis is true, or that the hypothesis is false, or that you can’t tell. Writing the Bottom Line First would be deciding in advance to conclude that the hypothesis is true.
Depending on where the hypothesis came from, the experimental method may be Privileging the Hypothesis, which the social process of science compensates for by requiring lots of evidence.
Deciding in advance to conclude that the hypothesis is true is not a danger if the means you choose for reaching that conclusion is one that, in reality, won’t let you reach it when the hypothesis is false. Keep in mind: you can decide to do something and still be unable to do it.
Suppose I believe that a hypothesis is true. I believe it so strongly that I believe a well-designed experiment will prove that it is true. So I decide in advance to conclude that the hypothesis is true by doing what I am positive in advance will prove the hypothesis: running a well-designed experiment that will convince the doubters. So I do that, and suppose that the experiment supports my hypothesis. The fact that my intention was to prove the hypothesis doesn’t invalidate the result of the experiment. The experiment is, by its own good design, protected from my intentions.
A well-designed experiment will yield truth whatever the intentions of the experimenter. What makes an experiment good isn’t good intentions on the part of the experimenter. That’s the whole point of the experiment: we can’t trust the experimenter, and so the experiment by design renders the experimenter powerless. (Of course, we can increase our confidence even further by replicating the experiment.)
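A toy simulation of that point (my own sketch, with made-up numbers, not a description of any real experiment): the experimenter commits in advance to a fixed decision rule, and the conclusion then depends only on the data, whatever they hoped for.

```python
# Toy illustration: a pre-specified test decided before seeing the data.
# If the true effect is zero, the test rarely comes out in the experimenter's
# favour, no matter how strongly they intended to "prove" the hypothesis.
import random
import statistics

random.seed(1)

def experiment(true_effect, n=100):
    """Conclude 'effect exists' only if the sample mean clears a bar fixed in advance."""
    sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
    threshold = 1.96 / n ** 0.5   # fixed before the data, not tuned to it
    return statistics.mean(sample) > threshold

runs = 1000
false_positives = sum(experiment(true_effect=0.0) for _ in range(runs))
true_positives = sum(experiment(true_effect=0.5) for _ in range(runs))
print(f"hypothesis actually false: concluded 'true' in {false_positives}/{runs} runs")  # only a few percent
print(f"hypothesis actually true:  concluded 'true' in {true_positives}/{runs} runs")   # almost all runs
```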
Now let’s change both the intention and the method. Suppose you don’t know whether a hypothesis is true and decide to discover whether it is true by examining the evidence. The method you choose is “preponderance of evidence”. It is quite possible for you, entirely unintentionally, to in effect cherry-pick evidence for the hypothesis you were trying to test. People make procedural mistakes like this all the time without intending to do so. For example, you see one bit of evidence and note that this particular bit makes the hypothesis appear to be true. But now, uh oh! You’re subject to confirmation bias! That means that you will automatically, without meaning to, start to pay attention to confirming evidence and ignore disconfirming evidence. And you didn’t mean to!
Absolutely, but privileging the hypothesis is a danger whether or not you have decided in advance to conclude the hypothesis. Look at Eliezer’s own description:

Then, one of the detectives says, “Well… we have no idea who did it… no particular evidence singling out any of the million people in this city… but let’s consider the hypothesis that this murder was committed by Mortimer Q. Snodgrass, who lives at 128 Ordinary Ln. It could have been him, after all.”
This detective has, importantly, not decided in advance to conclude that Snodgrass is the murderer.
What jumps out as strange to me is doing this after you’ve already been convinced, seemingly to enhance your credence. Still, this is a good point.
The danger that Eliezer warns against is absolutely real. So what’s special about math? I think there is something special: it’s really, really hard to make a bogus argument in math and get it past somebody who’s paying attention. In the case of experimental science, the experiment is deliberately constructed to take the result out of the hands of the experimenter. At least it should be. The experimenter only controls certain variables.
So why is there ever a danger? The problem seems to arise with the mode of argument that appeals to “the preponderance of evidence”. That kind of argument is totally exposed to cherry-picking, allowing the cherry-picker to create whatever preponderance he wants. It is, unfortunately, perhaps the most common kind of argument you’ll find in the world.
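Here is an equally toy sketch (again my own illustration, with invented numbers) of how a “preponderance of evidence” can be manufactured by nothing more than selective attention: two tallies of the same noise, one recording everything, one quietly dropping disconfirming pieces.

```python
# Toy illustration of cherry-picking by selective attention. The hypothesis is
# actually false, so each piece of evidence "confirms" it only by chance.
# The honest observer records every piece; the biased observer is less likely
# to record disconfirming pieces. Only the biased tally shows a "preponderance".
import random

random.seed(0)

def tally(n_pieces, p_record_confirming, p_record_disconfirming):
    confirming = disconfirming = 0
    for _ in range(n_pieces):
        piece_confirms = random.random() < 0.5   # pure 50/50 noise
        p_record = p_record_confirming if piece_confirms else p_record_disconfirming
        if random.random() < p_record:
            if piece_confirms:
                confirming += 1
            else:
                disconfirming += 1
    return confirming, disconfirming

honest = tally(1000, p_record_confirming=1.0, p_record_disconfirming=1.0)
biased = tally(1000, p_record_confirming=1.0, p_record_disconfirming=0.4)
print("honest tally (confirming, disconfirming):", honest)  # roughly even
print("biased tally (confirming, disconfirming):", biased)  # lopsided "preponderance"
```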
The two methods can be combined: when you read something you agree with, try to come up with a counterargument. If you can’t refute the counterargument, post it; if you can, post both the counterargument and its refutation.
Sorry, I’m not exactly sure what “writing the last line first” means. I’m guessing you’re referring to the syllogism, and you take my proposal to mean arguing backwards from the conclusion to produce another argument for the same conclusion. Is this correct?
I’m referring to this notion of knowing what you want to conclude, and then fitting the argument to that specification. My intuition, at least, is that it would be more useful to focus on weaknesses of your newly adopted position—and if it’s right, you’re bound to end up with new arguments in favor of it anyway.
I agree, though, that agreement should not be taken as license to avoid engaging with a position.
I suppose I should note, given the origin of these comments, that I recommend these things only in a context of collaboration—and if we’re talking about a concrete suggestion for action or the like rather than an airy matter of logic, the rules are somewhat different.
Should arguers be encouraged, then, not to write out all the arguments in favor of their claim, in order to leave more room for those who agree with them to add their own supporting arguments?
This requires either refraining from fully exploring the subject (so that you don’t think of all the arguments you can) or straight out omitting arguments you thought of. Not exactly Dark Side, but not fully Light Side either...
Y’know, you may be right. I also suspect this is something that depends to a significant extent on the type of proposition under consideration.
Does it really signal that to other readers, or is that just in your mind? If you see someone posting an agreement, do you really judge him as a weak-minded sycophant?
If they post just an “Amazing post, as usual, Eliezer” without further informative contribution, then I too get this mild sense of “sucking up” going on.
Actually, this whole blog (as well as Overcoming Bias) does have this subtle aura of “Eliezer is the rationality God that we should all worship”. I don’t blame EY for this; more probably, people are just naturally (evolutionarily?) inclined to religious behaviour, and if you hang around LW and OB, you might project that onto the person who acts like the alpha male of the pack. In fact, it might not even need any religious undertones; it could just be “alpha-male mammalian evolution society” stuff.
Eliezer is a very smart person, certainly much smarter than me. But so is Robin Hanson. (I won’t get into which one is “smarter”, as they are both at least two levels above me.) I feel Robin is often under-appreciated (perhaps that’s the closest word), maybe because he doesn’t post as often, but perhaps also because people tend to “me too” Eliezer a lot more often than they “me too” Robin (though again, this might be because EY posts much more frequently than RH).
It’s simpler than that: 1) Eliezer expresses certainty more often than Robin, and 2) he self-discloses to a greater degree. The combination of the two induces a tendency toward identification and aspiration. (The evolutionary reasons for this are left as an exercise for the reader.)
Please note that this isn’t a denigration—I do exactly the same things in my own writing, and I also identify with and admire Eliezer. Just knowing what causes it doesn’t make the effect go away.
(To a certain extent, it’s just audience-selection—expressing your opinions and personality clearly will make people who agree/like what they hear become followers, those who disagree/dislike become trolls, and those who don’t care one way or the other just go away altogether. NOT expressing these things clearly, on the other hand, produces less emotion either way. I love the information I get from Robin’s posts, but they don’t cause me to feel the same degree of personal connection to their author.)
I do believe I under-appreciate Robin. However, what it feels like to me is that my personality, at (I suspect) a genetic level, is more similar to Eliezer’s than to Robin’s. In particular, my impression is that Robin is more talented than Eliezer at social kinds of cognition. That does not mean I think Robin is less rational. It means that when I read Eliezer’s work I think “yeah, that’s bloody obvious!”, whereas for some of Robin’s significant contributions I actually have to actively account for my own biases and work to consider his expertise and that of those he refers to.
My suspicion is that people whose minds are similar to Robin’s would be less inclined to get involved in rationalist discourse than the more instinctively individualist. This accounts somewhat for the difference in “me too”s, but if anything it makes Robin more remarkable.
“If you see someone posting an agreement, do you really judge him as a weak-minded sycophant?”
It depends greatly on what they’re agreeing with, and what they’ve said and done before.