But for hypotheses and models, I ask myself “plausibility of what? Being true?”
Plausibility of being true given the prior information. Just as Aristotelian logic gives valid arguments (but not necessarily sound ones), Bayes’s theorem gives valid but not necessarily sound plausibility assessments.
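To make the parallel concrete (writing I for the prior information, just my shorthand here):

$$
P(H \mid D, I) \;=\; \frac{P(D \mid H, I)\,P(H \mid I)}{P(D \mid I)} .
$$

The theorem manipulates plausibilities validly for whatever I you condition on; soundness is a separate question about whether I itself is right.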
Following this path shades into decision theory.
That’s pretty much why I wanted to make the distinction between plausibility and usefulness. One of the things I like about the Cox-Jaynes approach is that it cleanly splits inference and decision-making apart.
Plausibility of being true given the prior information.
Okay, sure, we can go back to the Bayesian mantra of “all probabilities are conditional probabilities”. But our prior information effectively includes the statement that one of our models is the “true one”. And that’s never the actual case, so our arguments are never sound in this sense, because we are forced to work from prior information that isn’t true. This isn’t a huge problem, but in some sense it undermines the motivation for finding these probabilities and treating them seriously: they’re conditional probabilities being applied in a case where we know that what is being conditioned on is false. What grounds them in our actual situation? I like to take the stance that in practice this is still useful, as an approximation procedure for sorting through models that are approximately right.
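To spell out where that statement hides: with a finite candidate set of models M_1, …, M_k (my notation, nothing more), Bayes’s theorem gives

$$
P(M_i \mid D, I) \;=\; \frac{P(D \mid M_i, I)\,P(M_i \mid I)}{\sum_{j=1}^{k} P(D \mid M_j, I)\,P(M_j \mid I)},
\qquad \sum_{j=1}^{k} P(M_j \mid I) = 1,
$$

and that normalization only makes sense if I asserts that exactly one of the M_j holds. The numbers are probabilities over a closed list, which is exactly the conditioning I’m uneasy about.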
And that’s never the actual case, so our arguments are never sound in this sense, because we are forced to work from prior information that isn’t true.
One does generally resort to non-Bayesian model checking methods. Andrew Gelman likes to include such checks under the rubric of “Bayesian data analysis”; he calls the computing of posterior probabilities and densities “Bayesian inference”, a preceding subcomponent of Bayesian data analysis. This makes for sensible statistical practice, but the underpinnings aren’t strong. One might consider it an attempt to approximate the Solomonoff prior.
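For concreteness, here is a minimal sketch of the kind of check I take Gelman to mean, a posterior predictive check; the model, data, and test statistic are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: we model it as i.i.d. Normal(mu, 1), but the true scale is 2.
data = rng.normal(loc=1.0, scale=2.0, size=50)

# Posterior for mu under the (wrong) unit-variance model with a flat prior:
# mu | data ~ Normal(sample mean, 1/n).
n = len(data)
post_mean, post_sd = data.mean(), 1.0 / np.sqrt(n)

# Posterior predictive check: simulate replicated data sets from the fitted
# model and compare a test statistic (here the sample standard deviation)
# with the observed one.
T_obs = data.std(ddof=1)
T_rep = []
for _ in range(4000):
    mu = rng.normal(post_mean, post_sd)    # draw a parameter from the posterior
    rep = rng.normal(mu, 1.0, size=n)      # replicate data under the model
    T_rep.append(rep.std(ddof=1))

# A predictive p-value near 0 or 1 flags misfit that inference *within*
# the model would never report.
p_value = np.mean(np.array(T_rep) >= T_obs)
print(f"observed sd = {T_obs:.2f}, predictive p-value = {p_value:.3f}")
```

Bayesian inference within the unit-variance model never questions that assumption; it is the check afterwards that flags the misfit, which matches the inference-versus-data-analysis split described above.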
Yes, in practice people resort to less well-motivated methods that work well.
I’d really like to see some principled answer that has the same feel as Bayesianism though. As it stands, I have no problem using Bayesian methods for parameter estimation. This is natural because we really are getting pdf(parameters | data, model). But for model selection and evaluation (i.e. non-parametric Bayes), I always feel that I need an “escape hatch” to include new models that the Bayes formalism simply doesn’t have any place for.
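By “getting pdf(parameters | data, model)” I mean something like this grid-approximation sketch (one fixed model, made-up data, all names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed model: y ~ Normal(mu, sigma = 1); the only unknown is mu.
data = rng.normal(loc=0.7, scale=1.0, size=30)

# Grid approximation to pdf(mu | data, model): prior times likelihood, renormalized.
mu_grid = np.linspace(-3.0, 3.0, 601)
d_mu = mu_grid[1] - mu_grid[0]
log_prior = np.zeros_like(mu_grid)                    # flat prior over the grid
log_lik = np.array([-0.5 * np.sum((data - mu) ** 2) for mu in mu_grid])
log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= post.sum() * d_mu                             # normalize to a proper density

print("posterior mean of mu:", np.sum(mu_grid * post) * d_mu)
```

Everything in it is conditioned on the single model written down; there is no posterior weight available for a model that isn’t on the list, which is why I keep reaching for an escape hatch.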
I feel the same way.