So you are bringing up a whole lot of unrelated, or only loosely linked ideas. I’ll be honest, such a long reply of (at best) loosely connected ideas pattern matches to “axe-to-grind” for me, so I strongly considered not bothering with this post. As it is, let’s limit the scope to discussing Bem.
Anyway, what exactly do you believe Bem is doing with his paper? I assumed the claim in your first post was that Bem was publishing silly results to highlight the danger of deifying p-values (as Sokal published a silly paper to highlight the low standards of the journal he submitted to). I contend this is not true, and that Bem believes the following (based on interviews, the focus of Bem’s work, and a personal conversation with him):
1. psi is a real phenomenon
2. ganzfeld experiments (as interpreted through standard statistical significance tests) are strong evidence for psi
3. “Feeling the Future” and other similar experiments are evidence for precognition
I contend all of these beliefs are mistaken.
In response to further claims you’ve made regarding the academic response to Bem, I further contend:
the academic community is right to be skeptical of such work, and in fact it’s a sort of informal Bayesian filter.
the academic response raised valid statistical objections to Bem’s work
The biggest problem I see is that an effect has to have a prior as ludicrously small as Bem’s before proper scrutiny is applied. Lots of small effects that warrant closer methodological scrutiny slip through the cracks.
So you are bringing up a whole lot of unrelated, or only loosely linked ideas. I’ll be honest, such a long reply of (at best) loosely connected ideas pattern matches to “axe-to-grind” for me
I don’t think that you can understand the position of people who fundamentally disagree with you by reading a single paragraph. Yes, you can easily find a position where they seem to hold a different opinion than you do, but that doesn’t mean that you understand what they actually believe.
Anyway, what exactly do you believe Bem is doing with his paper?
Bem thinks that academic science is generally not taking the data of its experiments seriously and is therefore coming to wrong conclusions in all sorts of domains.
Sokal thinks that the literature department can’t tell true from false. Bem thinks the same is true of the psychology department: it lacks that ability.
Sokal is not highlighting some specific issue with one technique that the literature department uses. His critique of the literature department is more fundamental.
The same goes for Bem. Bem doesn’t just think that academic psychology is wrong on one issue but that it’s flawed on a more fundamental level.
3. “Feeling the Future” and other similar experiments are evidence for precognition
Any good Bayesian holds that belief. If you look at a LessWrong post on what people learned from becoming Bayesians, you will find:
Banish talk like “There is absolutely no evidence for that belief”. P(E | H) > P(E) if and only if P(H | E) > P(H). The fact that there are myths about Zeus is evidence that Zeus exists. Zeus’s existing would make it more likely for myths about him to arise, so the arising of myths about him must make it more likely that he exists.
There are a lot of people in academia who don’t hold that belief and who aren’t good Bayesians. Bem is completely on the right side on that point.
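For reference, the biconditional in that quote is just Bayes’ theorem rearranged (a minimal derivation, assuming P(H) > 0 and P(E) > 0):

$$
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}
\quad\Longrightarrow\quad
\bigl(\, P(H \mid E) > P(H) \iff P(E \mid H) > P(E) \,\bigr)
$$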
I don’t think that you can understand the position of people who fundamentally disagree with you by reading a single paragraph.
I didn’t claim to. What I claimed in what you quoted is that dragging in concepts like evidence-based medicine and climate science isn’t going to help anything in a discussion of Bem’s paper.
Bem thinks that academic science is generally not taking the data of its experiments seriously and is therefore coming to wrong conclusions in all sorts of domains.
I would phrase this differently. Bem believes that an informal Bayesian filter (extraordinary claims require extraordinary evidence) is causing academic psychology to unfairly conclude that psi phenomena aren’t real. He wants us to ignore the incredibly low prior for psi, and use weak but statistically significant effects to push us to “psi is probable.”
I don’t agree with this, as I’ve hopefully made clear.
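To make that filter concrete, here is a minimal sketch of the odds arithmetic; the prior and the Bayes factor are numbers I am assuming purely for illustration, not figures from Bem’s paper or from the published critiques:

```python
# Posterior odds = prior odds * Bayes factor (likelihood ratio).
# Illustrative numbers only: a very low prior for psi, and a modest
# Bayes factor of the size one weak but "significant" result might supply.
def posterior_probability(prior: float, bayes_factor: float) -> float:
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1.0 + posterior_odds)

prior = 1e-6          # assumed prior probability that psi is real
bayes_factor = 3.0    # assumed strength of one weak but significant result

print(posterior_probability(prior, bayes_factor))  # ~3e-6: nowhere near "psi is probable"
```

On those assumed numbers, a single weak but significant result barely moves the posterior, which is the sense in which “extraordinary claims require extraordinary evidence” acts as a filter.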
Any good Bayesian holds that belief.
Not necessarily true: a good Bayesian who has read the paper could conclude the methodology is flawed enough that it’s not much evidence of anything (which was also largely the response of academic psychology). I believe the methodology of “Feeling the Future” was so flawed that it isn’t evidence for anything. The replication attempts that failed further reinforce this belief.
Bem believes that an informal Bayesian filter (extraordinary claims require extraordinary evidence)
Bem does not believe that most researchers really follow “extraordinary claims require extraordinary evidence.” He believes that many of the relevant researchers won’t be convinced regardless of what evidence is provided.
He might be wrong about that belief, but saying that he believes that most researchers would be convinced by reasonable data misunderstands Bem.
methodology is flawed enough that it’s not much evidence of anything
Not much evidence and no evidence are two different things. If he believes it’s evidence and you don’t, he’s right. It might not be much evidence, but it’s evidence in the Bayesian sense.
If you debate him in person and pretend it’s no evidence, he will continue to say it’s evidence and be right. That will prevent the discussion from getting to the question that actually matters: how strong the evidence happens to be.
The replication attempts that failed further reinforce this belief.
At university we had a failed attempt to replicate a PCR experiment. It really made the postdoc who was running the experiment ashamed that she couldn’t get it right and that it failed for some reason unknown to her. In no way does this show that PCR doesn’t work.
As far as replication goes, Bem also seems to think that there were successful replication attempts:
What Wiseman never tells people is in Ritchie, Wiseman and French is that his online registry where he asked everyone to register, first of all he provided a deadline date. I don’t know of any serious researcher working on their own stuff who is going to drop everything and immediately do a replication… anyway, he and Ritchie and French published these three studies. Well, they knew that there were three other studies that had been submitted and completed and two of the three showed statistically significant results replicating my results.
If you have a very strange effect that you don’t understand and can’t pin down, having 2 of 6 replication attempts be successful does not really prove that there is no effect.
If something can go wrong even with a method like PCR that has been done millions of times, and it can fail to replicate without knowledgeable people knowing why, then a failure to replicate a very new effect doesn’t mean much. Trying to pin down the difference between the 2 successful and the 4 failed replication attempts might be in order. At least that’s where I would focus my attention if I were not attached to the outcome. It may very well turn out that there is no real effect in the end, but there seems to be more than nothing.
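One way to put a number on that intuition: how surprising would 2 significant results out of 6 attempts be if there were no effect at all? A minimal sketch, assuming the attempts are independent, each tested at alpha = 0.05, and ignoring the registration and selection issues mentioned above:

```python
from math import comb

alpha, n = 0.05, 6

# Probability of at least 2 "significant" replications out of 6 purely by
# chance, if the true effect is zero and each test is independent.
p_at_least_2 = sum(comb(n, k) * alpha**k * (1 - alpha)**(n - k) for k in range(2, n + 1))
print(round(p_at_least_2, 4))  # ≈ 0.0328
```

This doesn’t say what explains the pattern (a real effect, methodological flaws, or selective reporting); it only illustrates why 2 of 6 is not obviously nothing.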
From the same interview with Bem that I linked to above (this part spoken by the moderator):
How ironic that would be, since one of the strategies of the debunkers has long been to psychologize the phenomena and say, “We don’t need to study the phenomena. We need to study these weird people who report and believe these weird things.” Wouldn’t it be ironic if it turns out we need to look at the beliefs and psychology of the experimenters and why they don’t believe and why they don’t get these effects.
Again, that’s not that much different from the way Sokal sees the literature department.