“Cognitive psychology is the field that consists of experimental cognitive psychology, cognitive neuroscience, cognitive neuropsychology, and computational cognitive science” was the breakdown used in my cognitive psychology textbook (relatively influential, cited 3651 times according to Google Scholar). There’s also substantial overlap in the experimental setups: as with many of the experiments mentioned in the post, lots of cognitive neuroscience experiments are designed so that even if you removed the brain imaging part, the behavioral component could still pass on its own as an experimental cognitive psychology finding. Similarly, the book cites a combination of neuroimaging and behavioral results in order to build up its theory; many of the priming experiments that I discuss also show up in that list of replicated cognitive psychology experiments.
Re: the voodoo correlations paper: I haven’t read it myself, but my understanding from online discussion is that the main error it discusses apparently only modestly overstates the strength of some correlations; it doesn’t actually cause entirely spurious correlations to be reported. The paper also separately discusses another error which was more serious, but only names a single paper guilty of that error, which isn’t very damning. So I see the paper mostly as an indication of the field being self-correcting, with flaws in its methodologies being pointed out and then improved upon.
The voodoo paper starts by noting that social neuroscience papers regularly report correlations higher than the theoretical maximum implied by the reliability of the underlying measures.
A defense of neuroscience against the Voodoo paper that ignores its central charge, namely that the criticized social neuroscience papers report impossible results (you could call them paranormal), is no good defense.
Whether or not it causes entirely spurious correlations to be reported depends on the degrees of freedom the models have. If you have a dataset with 200 patients and 2000 degrees of freedom in your mathematical model, the model can fit pure noise and still report a strong correlation. The neuroscience folks often use statistical techniques for which there’s no mathematically sound method to assess the degrees of freedom. Frequently, they run simulated data through the model to eyeball the size of the problem, but there’s no mathematical guarantee that this will catch every case where the degrees of freedom are too high.
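To see how bad this can get, here’s a minimal sketch in R (numbers chosen to match the example above; all the data are random noise, not taken from any real study). With 2000 noise predictors and only 200 patients, an ordinary regression reproduces the outcome exactly, so the in-sample correlation comes out as 1 even though there is no signal whatsoever:

```r
set.seed(42)
n <- 200    # patients
p <- 2000   # degrees of freedom in the model, far more than data points
X <- matrix(rnorm(n * p), n, p)  # pure noise standing in for brain data
y <- rnorm(n)                    # pure noise standing in for the behavioral measure
fit <- lm(y ~ X)                 # massively overparameterized fit
cor(y, fitted(fit))              # in-sample correlation is 1, despite zero signal
```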
Even if you grant that it’s only modest overstating: scientists are generally not expected to modestly overstate their results; they’re supposed to remove systematic effects that inflate them.
Even if you think that there’s some value in predicting training data, they could still run a second test where they split their data into a training pile and an evaluation pile, run their model again, and report the results. It’s not much work, as they don’t need to create a new model; it’s 4 lines of R (maybe even less if you write it concisely).
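For illustration, here’s a hedged sketch of what those lines might look like, assuming the data sit in a data frame dat with the behavioral outcome in a column y (both names hypothetical):

```r
set.seed(1)
train <- sample(nrow(dat), floor(nrow(dat) / 2))  # random half for fitting
fit   <- lm(y ~ ., data = dat[train, ])           # run the model on the training pile
preds <- predict(fit, newdata = dat[-train, ])    # predict the held-out patients
cor(dat$y[-train], preds)                         # out-of-sample correlation to report
```

A correlation that survives this split is evidence of real signal; one that collapses was mostly the model memorizing its own training data.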