Right, like I said, publication bias is a possibility. But in Honorton’s precognition meta-analysis the results were strong enough that, for them not to be significant, the ratio of unpublished studies averaging null results to published studies would have to be 46:1. That seems too high for me to be comfortable attributing everything to publication bias. It is this history of results, rather than Bem’s lone study, that troubles me.
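For anyone curious where a ratio like 46:1 comes from, here is a minimal sketch of Rosenthal’s fail-safe-N arithmetic under Stouffer’s method, using hypothetical z-scores rather than Honorton’s actual data:

```python
from scipy.stats import norm

def fail_safe_n(z_scores, alpha=0.05):
    """Rosenthal's fail-safe N: how many unpublished studies averaging
    z = 0 would be needed to drag the Stouffer-combined z below the
    one-tailed significance threshold."""
    z_crit = norm.ppf(1 - alpha)           # ~1.645 for alpha = 0.05
    combined_numerator = sum(z_scores)     # Stouffer z = sum(z) / sqrt(k + N)
    return max(0.0, (combined_numerator / z_crit) ** 2 - len(z_scores))

# Hypothetical example: 30 published studies, each with a modest z of 1.2.
published = [1.2] * 30
n_null = fail_safe_n(published)
print(f"unpublished null studies needed: {n_null:.0f} "
      f"(about {n_null / len(published):.0f} per published study)")
```

The point is just that when the combined evidence is strong, the pile of unpublished null studies needed to wash it out gets implausibly large.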
Bem has effectively been getting a group of students to flip a bunch of coins for the last eight years.
What evidence is there for this?

From here,

The paper … is the culmination of eight years’ work by Daryl Bem of Cornell University in Ithaca, New York.
Volunteers were told that an erotic image was going to appear on a computer screen in one of two positions, and asked to guess in advance which position that would be. The image’s eventual position was selected at random, but volunteers guessed correctly 53.1 per cent of the time.
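As an aside on the 53.1 per cent figure itself: whether such a small edge over chance is significant depends almost entirely on how many guesses were pooled, which the excerpt doesn’t say. A rough sketch with hypothetical trial counts:

```python
from scipy.stats import binomtest

# Hypothetical: at what pooled trial count does a 53.1% hit rate become
# significant (one-sided) against the 50% chance baseline?
for n in (100, 500, 1000, 2000):
    hits = round(0.531 * n)
    p = binomtest(hits, n, p=0.5, alternative="greater").pvalue
    print(f"n = {n:4d}, hits = {hits:4d}, one-sided p = {p:.3f}")
```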
Why do we think this means early test groups weren’t included in the study? It just sounds like it took eight years to get the large sample size he wanted.
I think that it means that early test groups weren’t included because that is the easiest way to produce the results we’re seeing.
It just sounds like it took eight years to get the large sample size he wanted.
Why eight years? Did he decide that eight years ago, before beginning to collect data? Or did he run tests until he got the data he wanted, then check how long it had taken? I am reasonably certain that if he had gotten statistically significant results four years into this study, he would have stopped the tests and published a paper, saying “I took four years to make sure the sample size was large enough.”
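That worry, stopping as soon as the numbers look good, is easy to demonstrate. A quick simulation with made-up parameters (not Bem’s actual design) of peeking at a running coin-flip experiment and stopping the moment a one-sided test dips under .05:

```python
import random
from scipy.stats import binomtest

def peek_and_stop(max_n=1000, check_every=50, alpha=0.05):
    """Flip a fair coin, run a one-sided test after every batch, and stop
    as soon as p < alpha. Returns True if 'significance' was ever reached
    even though there is no real effect."""
    hits = 0
    for n in range(1, max_n + 1):
        hits += random.random() < 0.5
        if n % check_every == 0:
            if binomtest(hits, n, p=0.5, alternative="greater").pvalue < alpha:
                return True
    return False

random.seed(0)
runs = 2000
false_positives = sum(peek_and_stop() for _ in range(runs))
print(f"nominal alpha = 0.05, observed false-positive rate = "
      f"{false_positives / runs:.3f}")
```

Even with no effect at all, the chance of ever crossing the threshold comes out noticeably above the nominal 5 per cent.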
Looking at the actual study, it seems to include the results of quite a few different experiments. If he either excluded early tests or continued testing until he got the results he wanted, that would obviously make the study useless, but we can’t just assume that is what happened. Yes, it is likely relative to the likelihood of psi, but since finding out what happened isn’t that hard, it seems silly just to assume.
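And the “excluded early tests” branch is just as easy to sketch: quietly dropping the weakest batches of a fair coin-flipping experiment shifts the pooled hit rate upward. Again with made-up numbers, not anything from the actual paper:

```python
import random

random.seed(1)

# Hypothetical: 20 batches of 100 fair coin flips each.
batches = [sum(random.random() < 0.5 for _ in range(100)) for _ in range(20)]

overall = sum(batches) / (20 * 100)

# Keep only the 15 best-scoring batches, as if the weaker runs had been
# "pilot tests" that didn't count.
kept = sorted(batches, reverse=True)[:15]
selected = sum(kept) / (15 * 100)

print(f"all 20 batches : {overall:.3f} hit rate")
print(f"best 15 batches: {selected:.3f} hit rate")
```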