Well, I would find it really awkward for a Bayesian to condone a modus operandi such as: “The p-value of 0.15 indicates it is much more likely that there is a correlation than that the result is due to chance; however, for all intents and purposes the scientific community will treat the correlation as non-existent, since we’re not sufficiently certain of it (even though it likely exists).”
And this is a really, really good reason not to identify yourself as “Bayesian”: you end up not using effective methods when you can’t derive them from Bayes’ theorem (which is to be expected absent very serious training in deriving things).
Better to check out a few false candidates too many than to falsely dismiss important new discoveries.
Where do you think the funds for testing false candidates are going to come from? If you are checking too many false candidates, you are still dismissing important new discoveries, because the money and effort spent chasing them is money and effort not spent on the real ones. You are also taking time away from exploring the parts of the space nobody has looked at yet.
edit: also, I think you overestimate the extent to which promising avenues of research are “closed” by a failure to confirm. It is understood that a failure can result from a multitude of causes. Keep in mind also that with a strong effect you reach the same p-value with quadratically fewer samples (the required sample size scales roughly as one over the square of the effect size), so at a given sample size you are at much less risk of dismissing strong results.
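To make that scaling concrete, here is a minimal sketch (the function name, the 0.2/0.4 effect sizes, and the alpha/power defaults are my own illustrative assumptions, not numbers from the thread): under the usual normal-approximation power formula, the sample size needed to detect a standardized effect d at fixed significance and power goes as 1/d², so doubling the effect cuts the required n roughly fourfold.

```python
from scipy.stats import norm

def required_n(effect_size, alpha=0.05, power=0.8):
    """Approximate sample size for a two-sided one-sample z-test to detect
    a standardized effect `effect_size` at the given alpha and power,
    via the normal-approximation formula n = ((z_{alpha/2} + z_{power}) / d)^2."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return ((z_alpha + z_power) / effect_size) ** 2

# Doubling the standardized effect size cuts the required sample size ~4x.
print(round(required_n(0.2)))  # ~196 observations for a weak effect
print(round(required_n(0.4)))  # ~49 observations for an effect twice as strong
```

Equivalently, at a fixed sample size a strong effect produces a much larger test statistic, so it is far harder to miss.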