I haven't looked in much depth at your specific analyses, but I think trying to present these problems (assuming there actually are problems) as machine learning issues is misguided.
As an example, here is an analysis I did a couple of years ago of a research paper. Much like your claims about the papers you look at, it is a widely cited paper whose claims are not backed up by the data, because the authors allowed themselves too much flexibility in the parameters they used. The paper didn't use any Bayesian analysis; it simply tested so many hypotheses that it got significant results by chance. Sloppy research is not a new phenomenon and certainly doesn't require Bayesian analysis.
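To illustrate the mechanism (a toy simulation of my own, not the paper's actual data): run enough independent significance tests on pure noise and a few of them will come out "significant" at p < 0.05 by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A hypothetical study with no real effects: 40 independent outcome
# variables, all pure noise, compared across two groups of 30.
n_tests, n_per_group = 40, 30
p_values = []
for _ in range(n_tests):
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    p_values.append(stats.ttest_ind(a, b).pvalue)

# At alpha = 0.05 we expect roughly 2 "significant" results from
# noise alone; report only those and it looks like a discovery.
significant = sum(p < 0.05 for p in p_values)
print(f"{significant} of {n_tests} tests significant at p < 0.05")
```

Nothing about this failure mode depends on the statistical school or the fitting method in use.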
If your assertion is that machine learning users are unaware of the dangers of overfitting, then I suggest looking at a few online training courses on machine learning, where you will find they go on and on and on about overfitting, almost to the point of self-parody.
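For anyone who hasn't sat through one of those courses, the standard cautionary demonstration looks roughly like this (a toy sketch on made-up noise data, not anyone's real analysis): a sufficiently flexible model fits its training data almost perfectly and then fails on fresh data.

```python
import numpy as np

rng = np.random.default_rng(1)

# 20 training points with no real x-y relationship: pure noise.
x = np.linspace(0, 1, 20)
y_train = rng.normal(size=20)

# A degree-10 polynomial is flexible enough to chase the noise.
coeffs = np.polyfit(x, y_train, deg=10)
fit = np.polyval(coeffs, x)

train_mse = np.mean((fit - y_train) ** 2)             # near zero
test_mse = np.mean((fit - rng.normal(size=20)) ** 2)  # much larger
print(f"train MSE: {train_mse:.3f}, test MSE: {test_mse:.3f}")
```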
The complaints about using priors in Bayesian statistics are well-trodden ground, and I think it would be instructive to read up on the implicit priors of frequentist statistics.
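To make "implicit priors" concrete, here is a toy sketch of my own (a coin-flip example, not anything from your post): the frequentist maximum-likelihood estimate is numerically identical to the Bayesian maximum-a-posteriori estimate under a flat prior, so declining to state a prior is itself a prior choice.

```python
import numpy as np

# Hypothetical data: 7 heads in 10 coin tosses.
heads, n = 7, 10
p = np.linspace(0.001, 0.999, 999)

log_lik = heads * np.log(p) + (n - heads) * np.log(1 - p)
flat_log_prior = np.zeros_like(p)     # "no prior" == uniform prior
log_post = log_lik + flat_log_prior

# The MLE and the flat-prior MAP estimate coincide (both ~0.7).
print(p[np.argmax(log_lik)], p[np.argmax(log_post)])
```

And a flat prior on p is not flat on, say, the log-odds, so even the "uninformative" choice smuggles in information.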
I think lumping data falsification in with machine learning is particularly disingenuous: liars gonna lie, and the method isn't particularly relevant.
So, in short, I don't think you've identified faults that are specific to machine learning, or given evidence that the errors are more prevalent than when using alternative methods (even granting that your specific analyses are correct).