You’re making a massive assumption: that self-experimentation is not biased worse than regular clinical trials by things like selection effects. This is what I mean by methodological concerns making each self-experiment far, far less than n=1. I mean, look at OP—from the sound of it, the friend did not report their results anywhere (perhaps because they were null?). Bingo, publication bias. People don’t want to discuss null effects, they want to discuss positive results. I’ve seen this first-hand with dual n-back, among others, where I had trouble eliciting the null results even though they existed.
Given this sort of bias and zero effort on self-experimenters’ part to counter it, yes, you absolutely could do far worse than random by sampling 1000 self-experimenters compared to 1000 clinical trial participants! This is especially true for highly variable phenomena like sleep, where you can spot any trend you like in all the noise—compare the dramatic, confident anecdotes collected by Seth Roberts about vitamin D at night, based on purely subjective retrospective recall of <10 nights, to my relatively moderate findings based on 40 nights of Zeo data.
(I actually have a little demonstration that someone is engaging in considerable confirmation bias, but I’m not done yet. I should be able to post the result in early May.)
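The selection effect above is easy to demonstrate numerically. Here's a minimal sketch (my own toy model, not anything from the actual dual n-back or vitamin D data): assume an intervention with zero true effect and noisy measurements; a clinical trial records every participant, while self-experimenters only write up their results when they happen to look positive. Aggregating the self-reports then manufactures an effect out of pure noise:

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.0   # assumption: the intervention truly does nothing
NOISE_SD = 1.0      # night-to-night (or trial-to-trial) variability
N_PEOPLE = 1000

# Clinical trial: every participant's measured effect is recorded,
# positive or null, so the average estimates the true effect.
trial = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_PEOPLE)]

# Self-experimenters: same underlying data, but only those who happen
# to observe a positive result bother to report it (publication bias).
reported = [x for x in (random.gauss(TRUE_EFFECT, NOISE_SD)
                        for _ in range(N_PEOPLE)) if x > 0]

print(statistics.mean(trial))     # close to 0: unbiased
print(statistics.mean(reported))  # substantially positive: bias from selection alone
```

With only the positive half of a zero-mean distribution surviving to be reported, the pooled "self-experiment" estimate converges on roughly +0.8 standard deviations despite a true effect of exactly zero—which is why pooling 1000 self-reports can be worse than useless.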
I don’t necessarily disagree with you on any of this. Looks to me like we are talking past each other a little bit.