I think part of the problem is that the scientific community lacks an effective empirical way to distinguish between different ways of statistical reasoning.
As a result, a field like social neuroscience degenerated into being largely a cargo-cult science that manages to “predict” things from brain scans better than should theoretically be possible. It achieves this feat by “predicting” the very data that was used to train its models.
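To make the circularity concrete, here is a minimal sketch (my own illustration with scikit-learn; the data, sample sizes, and model are made up) showing that on pure noise, scoring a model on its own training data reports near-perfect accuracy, while held-out evaluation reveals chance performance:

```python
# Sketch: with many noise features, a model "predicts" its own training
# data almost perfectly, even though there is no real pattern at all.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 500))    # 40 "subjects", 500 pure-noise "voxels"
y = rng.integers(0, 2, size=40)   # random labels: nothing to predict

model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# Cargo-cult evaluation: score on the same data the model was trained on.
print("train accuracy:", model.score(X, y))  # typically ~1.0

# Honest evaluation: cross-validation on held-out folds.
print("cv accuracy:", cross_val_score(model, X, y, cv=5).mean())  # near chance (~0.5)
```

With 500 features and only 40 samples, the classifier can simply memorize the training labels, so any evaluation that reuses the training data will “discover” a pattern that isn’t there.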
When it’s as easy to publish with methods that find patterns where none exist as with methods that require a real pattern in the data, scientists in the field will be pressured in the cargo-cult direction.
To get good statistics, we would actually need a new gold standard for evaluating whether people know something.