This isn’t someone with tarot cards talking about using crystal energy to talk to your dead grandparent. To condemn someone for holding a similar position to the uneducated is to rule out contrarian thought before any debate occurs.
Did you read the linked post by Sean Carroll? Parapsychologists aren’t condemned for holding a similar position to the uneducated; they’re condemned for holding a position blatantly inconsistent with quantum field theory on the strength of evidence much, much weaker than the evidence for quantum field theory. Citing a century’s worth of experimentally confirmed physical knowledge is far from hand-waving.
Humans are still confused enough about the world that there is room for change in our current understanding of physics.
Again, this is explicitly addressed by Carroll. Physicists are not confused in the relevant regimes here. Strong evidence that certain highly precise models are correct has been obtained, and this constrains where we can reasonably expect future changes in our current understanding of physics.
Now, I’m not a physicist, so if I’m actually wrong about any of this, I’m willing to be corrected. But, as the saying goes, there is a time to confess ignorance, and a time to relinquish ignorance.
Physicists are not confused in the relevant regimes here.
We don’t know what the relevant regimes are here. Obviously human brains aren’t producing force fields that are bending spoons.
We have some experimental results. No one has any idea what they mean except it looks like something weird is happening. People are reacting to images they haven’t seen yet and we don’t have any good explanation for these results. Maybe it is fraud (with what motivation?), maybe there are methodological problems (but often no one can find any), maybe there is just publication bias (but it would have to be really high to explain the results in the precognition meta-analysis).
On the other hand, maybe our physics isn’t complete enough to explain what is going on. Maybe a complete understanding of consciousness would explain it. Maybe we’re in a simulation and our creators have added ad hoc rules that violate the laws of physics. Physics certainly rules out some explanations but Carroll certainly hasn’t shown that all but error/fraud/bias have been ruled out.
Btw, using spoon bending as the example and invoking Uri Geller is either ignorant or disingenuous of him (and I almost always love Sean Carroll). Parapsychologists more or less all recognize Geller as a fraud and an embarrassment and only the kookiest would claim that humans can bend spoons with their minds. Real parapsychological experiments are nothing like that.
I suspect it will be difficult to communicate why fraud, method error and publication bias are difficult explanations for me to accept if you aren’t familiar with the results of the field. I recommend Outside the Gates of Science if you haven’t read it yet.
It will actually be easy to communicate exactly what explanation there is for the events. Bem has effectively been getting a group of students to flip a bunch of coins for the last eight years. He has had them do it with perfectly sound methodology. Only now has he had a group that, through pure random chance, happened to flip 53% heads and 47% tails. The number of students, the number of coins, the number of flips: all are large enough that this is an unlikely event. But he’s spent eight years trying to make it happen, and so happen it eventually has. Good for him!
The only problem with all of this is that the journals we take to be sources of knowledge have this rule: anything more unlikely than x must have some explanation other than pure chance. This sounds right at first blush, but when somebody spends years trying to make pure chance spit out the result he wants, the rule fails badly. That is all that’s going on here.
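The "keep trying until chance cooperates" failure mode is easy to simulate. The sketch below uses entirely hypothetical numbers (not Bem's actual cohort or trial counts) to estimate how often at least one of several fair-coin cohorts clears the conventional p < 0.05 bar by luck alone:

```python
import math
import random

def cohort_p_value(n_flips: int, rng: random.Random) -> float:
    """Two-sided normal-approximation p-value for one fair-coin cohort."""
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    z = (heads - n_flips / 2) / math.sqrt(n_flips / 4)
    return math.erfc(abs(z) / math.sqrt(2))

rng = random.Random(0)
careers = 500    # simulated eight-year "careers" (hypothetical number)
cohorts = 8      # one cohort of subjects per year (hypothetical)
n_flips = 500    # coin flips per cohort (hypothetical)

lucky = sum(
    any(cohort_p_value(n_flips, rng) < 0.05 for _ in range(cohorts))
    for _ in range(careers)
)
print(f"Careers with at least one 'significant' cohort: {lucky / careers:.2f}")
```

With eight independent shots at a 5% threshold, roughly a third of such "careers" produce at least one nominally significant cohort, even though every coin is fair.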
Right, like I said, publication bias is a possibility. But in Honorton’s precognition meta-analysis the results were strong enough that, for them not to be significant, the ratio of unpublished studies averaging null results to published studies would have to be 46:1. That seems too high for me to be comfortable attributing everything to publication bias. It is this history of results, rather than Bem’s lone study, that troubles me.
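Ratios like 46:1 come from a file-drawer calculation. A minimal sketch of Rosenthal's fail-safe N, using hypothetical study counts and effect sizes (not Honorton's actual data), shows how such a number is derived:

```python
import math

def fail_safe_n(z_scores, alpha_z=1.645):
    """Rosenthal's fail-safe N: the number of unpublished null (Z = 0) studies
    needed to pull the Stouffer combined Z down to the one-tailed cutoff."""
    k = len(z_scores)
    z_sum = sum(z_scores)
    # The combined Z with m extra null studies is z_sum / sqrt(k + m);
    # solving z_sum / sqrt(k + m) = alpha_z for m gives:
    return (z_sum / alpha_z) ** 2 - k

# Hypothetical: 30 published studies, each with Z = 2.0 (not Honorton's data).
published = [2.0] * 30
m = fail_safe_n(published)
print(f"{m:.0f} null studies needed, a ratio of {m / len(published):.0f}:1")
```

Even modest per-study effects, once combined across a few dozen studies, require an implausibly deep file drawer to cancel; that is the shape of the 46:1 argument.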
Bem has effectively been getting a group of students to flip a bunch of coins for the last eight years.
What evidence is there for this?

From here,
The paper … is the culmination of eight years’ work by Daryl Bem of Cornell University in Ithaca, New York.
Volunteers were told that an erotic image was going to appear on a computer screen in one of two positions, and asked to guess in advance which position that would be. The image’s eventual position was selected at random, but volunteers guessed correctly 53.1 per cent of the time.
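Whether a 53.1 per cent hit rate is surprising depends entirely on the number of trials, which the excerpt doesn't give. A quick normal-approximation check with an assumed trial count (purely illustrative):

```python
import math

def two_sided_p(hits: int, n: int) -> float:
    """Normal-approximation p-value for `hits` successes in `n` fair trials."""
    z = (hits - n / 2) / math.sqrt(n / 4)
    return math.erfc(abs(z) / math.sqrt(2))

n = 1000                 # hypothetical trial count; the excerpt doesn't say
hits = round(0.531 * n)  # 53.1% hit rate
print(f"53.1% of {n} trials: p = {two_sided_p(hits, n):.3f}")
```

Under this assumed n, a 53.1 per cent hit rate sits almost exactly at the conventional p = 0.05 boundary; with fewer trials it would not be significant at all.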
Why do we think this means early test groups weren’t included in the study? It just sounds like it took eight years to get the large sample size he wanted.
I think it means early test groups weren’t included, because that is the easiest way to produce the results we’re seeing.
It just sounds like it took eight years to get the large sample size he wanted.
Why eight years? Did he decide that eight years ago, before beginning to collect data? Or did he run tests until he got the data he wanted, then check how long it had taken? I am reasonably certain that if he had gotten statistically significant results four years into this study, he would have stopped the tests and published a paper, saying “I took four years to make sure the sample size was large enough.”
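This "peek at the data, stop when it looks good" strategy, usually called optional stopping, demonstrably inflates false-positive rates. A minimal simulation with hypothetical batch sizes:

```python
import math
import random

def p_value(heads: int, n: int) -> float:
    z = (heads - n / 2) / math.sqrt(n / 4)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided, normal approximation

def optional_stopping(rng, batch=100, max_batches=40, alpha=0.05):
    """Flip fair coins in batches, checking the running p-value after each
    batch; stop and 'publish' the first time it dips below alpha."""
    heads = n = 0
    for _ in range(max_batches):
        heads += sum(rng.random() < 0.5 for _ in range(batch))
        n += batch
        if p_value(heads, n) < alpha:
            return True  # a fair coin 'reached significance'
    return False

rng = random.Random(1)
runs = 1000
rate = sum(optional_stopping(rng) for _ in range(runs)) / runs
print(f"False-positive rate with optional stopping: {rate:.2f}")
```

Although each individual look uses the nominal 5 per cent threshold, repeatedly checking multiplies the chance of a false positive several-fold, which is why "I stopped once the sample was big enough" is only reassuring if the stopping rule was fixed in advance.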
Looking at the actual study, it seems to include the results of quite a few different experiments. If he either excluded early tests or continued testing until he got the results he wanted, that would obviously make the study useless, but we can’t just assume that is what happened. Yes, it is likely relative to the likelihood of psi, but since finding out what happened isn’t that hard, it seems silly just to assume.