I was responding to the original post, which said:
You repeated some failed experiments in the hope of getting a different result. Multiple hypotheses, file drawer effect, motivated cognition, motivated stopping, researcher degrees of freedom, remining of old data: there is hardly a methodological sin you have not committed.
I realize my wording may have been suboptimal, but some of these biases (such as multiple comparisons) only make sense in a frequentist framework.
I was trying to explain why some of these methodological problems do not even apply in this example. It is not a matter of the other evidence being strong enough to outweigh the methodological flaws; those flaws are simply irrelevant to questions about the individual data points.
For example, a biased stopping rule would be a problem if you were trying to estimate the proportion of all locations that contain keys that open your door. However, it makes absolutely no difference to the integrity of the individual data points.
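Here is a minimal Python sketch of that distinction (not from the original discussion; the 20% figure, the function names, and the stop-on-first-success rule are all made up for illustration). The aggregate estimate is distorted by the stopping rule, yet each recorded trial remains a valid observation of whether that particular key opened the door.

```python
import random

# Hypothetical setup: assume 20% of locations hold a key that opens your door.
TRUE_PROPORTION = 0.2
random.seed(0)

def check_location():
    """One data point: does this location's key open the door?"""
    return random.random() < TRUE_PROPORTION

def fixed_sample(n=10):
    """Unbiased design: check exactly n locations, regardless of outcomes."""
    return [check_location() for _ in range(n)]

def optional_stopping(max_n=10):
    """Biased stopping rule: stop as soon as a key works."""
    results = []
    for _ in range(max_n):
        hit = check_location()
        results.append(hit)
        if hit:
            break
    return results

def mean_estimate(scheme, trials=100_000):
    """Average the in-sample proportion of working keys over many repetitions."""
    estimates = []
    for _ in range(trials):
        data = scheme()
        estimates.append(sum(data) / len(data))
    return sum(estimates) / len(estimates)

# The estimated proportion is roughly 0.2 under fixed sampling but
# noticeably inflated under the stop-on-success rule...
print("fixed sampling:    ", round(mean_estimate(fixed_sample), 3))
print("optional stopping: ", round(mean_estimate(optional_stopping), 3))
# ...yet every individual entry in `data` is still a perfectly good
# observation: "this particular key did (or did not) open the door."
```

The stopping rule only corrupts inferences about the population-level proportion; it cannot retroactively change what happened when any single key was tried.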