I support your focus on testing for confirmation bias, but I don’t think it was worth explicitly falsifying results (even for a short time), compared to saying “oh well” and repeating the process until you legitimately get an inconvenient result on a self-experiment. You’ve demonstrated that you’re willing to break the taboo (or injunction) against falsifying the object-level results of scientific experiments, which makes all of your data less valuable.
I found this to be a good and informative post, nonetheless.
Really? Are you really surprised that people are reluctant to broadcast data that doesn’t fit their theory? Have you read any political blogs?
By my model, it takes a pretty unusual person to give anywhere near equal weight to confirming and disconfirming evidence. We’re holding Seth Roberts to a very high standard here—one that Gwern himself has not necessarily achieved. Criticizing is easy.
This is a great example of what Frank Adamek talked about in his recent post re: lowering other people’s status. The reason folks subconsciously avoid disconfirming evidence is so they can preserve their status. In an ideal world preserving status would be a nonissue and disconfirming evidence would be fine. But then someone like Gwern comes along and snipes someone’s status, validating the concern with status that leads to confirmation bias in the first place.
(Stop violating useful social norms Gwern! Punish the norm violator! Just kidding, saying that would make me a hypocrite. I’ll assume Gwern posted this in good faith and didn’t mean to erode useful social norms.)
So can future articles on individual irrationality please be restricted to people writing about themselves?
To clarify: I’m in support of doing psychological tests on small scales and writing up the results on Less Wrong. I’m not in support of breaking certain ethical injunctions in the process.
If gwern had legitimately gotten a different self-experiment result than Seth Roberts did, and then the same process had transpired, I’d be entirely in favor of this post. It’s an important caveat to self-experimentation that you need to really watch out for confirmation bias, and to trust people more if they’re willing to publicize negative results as well as positive ones.
But falsifying results to achieve that, even temporarily, was a bad choice (and it makes me less willing to invest my time in reading gwern’s self-experimentation in the future).
Since I’m objecting, I may as well clarify: I would appreciate it if people told me unimportant lies (that were corrected later) in order to test me for biases, as long as the results of the test were just between the two of us, and possibly also in other circumstances. (Let’s say you have to pay me one dollar for each additional person who knows, up to the first 20 people; additional people are free after that.)
How would you test yourself for confirmation bias?
If I were Seth Roberts, I would look into my blog archive for the initial anecdotal results I posted on experiments now proven to have negative results. If most of these posts seemed positive, I probably have confirmation bias.
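A minimal sketch of the audit described above, with entirely invented data and a hand-labeled “tone” field (there is no such labeled archive; the names and structure here are hypothetical): for each experiment whose follow-up result was negative, check whether the initial write-up sounded positive.

```python
def confirmation_bias_score(posts):
    """Fraction of failed experiments whose initial post was upbeat.

    posts: list of dicts with 'initial_tone' ('positive'/'neutral'/'negative')
    and 'final_result' ('negative' means the experiment did not pan out).
    Returns None if no experiment has failed yet.
    """
    failed = [p for p in posts if p["final_result"] == "negative"]
    if not failed:
        return None  # nothing to audit yet
    upbeat = sum(1 for p in failed if p["initial_tone"] == "positive")
    return upbeat / len(failed)

# Illustrative archive: 3 failed experiments, 2 of which began with an
# upbeat initial post.
archive = [
    {"initial_tone": "positive", "final_result": "negative"},
    {"initial_tone": "positive", "final_result": "negative"},
    {"initial_tone": "neutral",  "final_result": "negative"},
    {"initial_tone": "positive", "final_result": "positive"},
]
print(confirmation_bias_score(archive))  # 2/3: most failures started upbeat
```

A score near 1 would suggest the initial anecdotal posts systematically ran ahead of the evidence; of course, hand-labeling tone on your own posts reintroduces the very bias being measured, so a blinded labeler would be better.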
I don’t think that can be done, since I don’t know of any of his theories that have been ‘now proven to have negative results’. I think a post linked somewhere here accuses Roberts of actively avoiding clinical trials; Roberts replies that he worked with a SUNY professor on 20 case studies for the Shangri-La diet. Since the diet is his centerpiece and the subject of his only book (AFAIK), it probably represents the best-case testing of any of his theories...
EDIT: http://andrewgelman.com/2010/03/clippin_it/#comment-53303
Perhaps you’re thinking of Andrew Gelman’s recent post.