I disagree with the quoted part of the post. Science doesn’t reject your Bayesian conclusion (provided it is rational); it’s simply unsatisfied by the fact that it’s a probabilistic conclusion. That is, probabilistic conclusions are never knowledge of truth. They are estimates of the likelihood of truth. Science will look at your Bayesian conclusion and say, “99% confident? That’s good, but let’s gather more data and raise the bar to 99.9%!” Science is the constant pursuit of knowledge. It will never reach it, but it demands that we never stop trying to get closer.
Beyond that, I think in a great many cases (not all) there are also some inherent problems in using explicit Bayesian (or other) reasoning about models of reality, because we simply have no idea what the full space of hypotheses is. As such, the best Bayesian reasoning can ever do in this context is give an ordering of models (e.g., this model is better than that model), not definitive probabilities. This doesn’t mean science rejects correct Bayesian reasoning, for the reason previously stated, but it does mean that in many contexts you can’t get definitive probabilistic conclusions from Bayesian reasoning in the first place.
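To make the ordering point concrete, here is a minimal sketch (the symbols M_1, M_2, and D are my own illustrative labels, not anything from the post). Comparing two candidate models only requires quantities attached to those two models:

\frac{P(M_1 \mid D)}{P(M_2 \mid D)} = \frac{P(D \mid M_1)}{P(D \mid M_2)} \cdot \frac{P(M_1)}{P(M_2)}

whereas an absolute posterior probability requires a denominator that sums over the entire hypothesis space, including models nobody has thought of yet:

P(M_1 \mid D) = \frac{P(D \mid M_1)\, P(M_1)}{\sum_i P(D \mid M_i)\, P(M_i)}

The unknown normalizing sum cancels in the ratio, which is why an ordering of models survives our ignorance of the full hypothesis space while a definitive probability does not.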