Hard sciences (basically physics and its relatives) are far less vulnerable to statistical pitfalls, because practitioners in those fields can generate effectively unlimited amounts of data simply by repeating experiments as many times as necessary.
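To make the "more data" point concrete, here's a minimal Python sketch (my own illustration, not from the comment above, with made-up numbers) showing how a naive frequentist confidence interval on a measured mean tightens like 1/sqrt(N) as you keep repeating the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean, sigma = 1.0, 0.5  # hypothetical "true" value and per-measurement noise

# Repeat the same experiment N times and quote a naive 95% interval on the mean.
for n in (10, 1_000, 100_000):
    data = rng.normal(true_mean, sigma, size=n)
    mean = data.mean()
    stderr = data.std(ddof=1) / np.sqrt(n)  # standard error shrinks ~ sigma / sqrt(N)
    print(f"N={n:>7}: mean = {mean:.4f} +/- {1.96 * stderr:.4f}")
```

With enough repetitions the interval becomes arbitrarily narrow, which is the luxury the comment is pointing at.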
There are exceptions, such as ultra-high-energy cosmic ray physics, where it'd take decades to collect enough data for naive frequentist statistics to be reliable.
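As a rough illustration of why low event counts are the problem (the counts below are invented, not actual cosmic-ray data): with only a handful of observed events, the usual Gaussian approximation k ± 1.96·sqrt(k) disagrees badly with an exact Poisson (Garwood) confidence interval, and can even dip below zero.

```python
from scipy.stats import chi2

def garwood_interval(k, cl=0.95):
    """Exact (Garwood) frequentist confidence interval for a Poisson count k."""
    alpha = 1.0 - cl
    lo = 0.0 if k == 0 else chi2.ppf(alpha / 2, 2 * k) / 2
    hi = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2
    return lo, hi

for k in (0, 2, 5, 100):  # illustrative observed event counts
    approx = (k - 1.96 * k**0.5, k + 1.96 * k**0.5)  # naive Gaussian interval
    exact = garwood_interval(k)
    print(f"k={k:>3}: Gaussian ({approx[0]:6.2f}, {approx[1]:6.2f})"
          f"  vs exact Poisson ({exact[0]:6.2f}, {exact[1]:6.2f})")
```

At k = 100 the two intervals nearly coincide; at k = 0 or 2 they don't, which is why the choice of statistical machinery starts to matter when events arrive once a decade.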
Statistics also remains important at the frontier of high energy physics. Reasoning about which models are likely to replace the Standard Model is plagued by every issue in the philosophy of statistics you can imagine, and the arguments about this affect where billions of dollars of research funding end up (build bigger colliders? more dark matter detectors? satellites?).
Sure; if we had enough data to conclusively answer a question, it would no longer be at the frontier. :-)
(I disagree with several of the claims in the linked post, but that’s another story.)