This feels a bit like beating up a straw man. Every actual scientific professional develops a sense of what problems are worth working on and what approaches are more or less promising. This sort of intuition isn’t scientifically provable—there’s no way to know in advance what you’ll find—but people can and do give reasons why they think X is more promising than Y. People value things like elegance, simplicity, ease of use, and so forth. Learning these sorts of judgements is one of the major things people do as PhD students and junior researchers. It may not be part of the scientific method, strictly defined, but it’s something we deliberately teach apprentice scientists.
You can formalize those technical judgements in terms of Solomonoff priors and expected utilities if you like, but doing so is a little silly. Different people have different computational hardware and therefore different measures of complexity. Saying “X has a lower Kolmogorov complexity than Y, for me” is no more or less objective than saying “X seems simpler”.
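For what it’s worth, the formal version of this point is the invariance theorem: Kolmogorov complexity is only defined up to the choice of universal machine. For any two universal machines $U$ and $V$ there is a constant $c_{U,V}$, depending only on the pair of machines, such that for every string $x$

$$
\left| K_U(x) - K_V(x) \right| \le c_{U,V}.
$$

The constant is independent of $x$ but can be arbitrarily large, so for any two particular hypotheses X and Y, which one counts as “simpler” can flip depending on which machine you pick. The asymptotic objectivity does no work for the finite, human-scale comparisons we actually make.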
There’s also something a little silly about saying “Science isn’t good enough, use Bayes”. General Bayesian updating is intractable. So you can’t use it. All you can ever really do is crude approximations. I don’t think you gain a lot by dressing up your judgement in a mathematical formalism that doesn’t really do any work for you.
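To make the intractability point concrete, here’s a toy sketch (my own illustration, not anything from the original discussion): exact Bayesian updating means maintaining one posterior weight per hypothesis, so even a hypothesis space described by a modest number of binary parameters blows up exponentially.

```python
from itertools import product

def exact_update(n_bits, likelihood):
    """Exact Bayesian update over all length-n binary hypotheses.

    Requires enumerating and reweighting 2**n_bits hypotheses, which
    is exactly why general Bayesian updating is intractable at scale.
    """
    hypotheses = list(product([0, 1], repeat=n_bits))
    prior = 1.0 / len(hypotheses)                    # uniform prior
    unnorm = [prior * likelihood(h) for h in hypotheses]
    z = sum(unnorm)                                  # normalizing constant
    return [w / z for w in unnorm]

# A made-up likelihood that favours hypotheses with more 1-bits,
# purely for illustration.
posterior = exact_update(12, lambda h: 1 + sum(h))
print(len(posterior))  # 4096 weights for only 12 binary parameters
```

Twelve parameters already means 4,096 weights; sixty means more weights than atoms you could ever store. Real reasoning therefore has to use crude approximations, which is the comment’s point.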