In Bayesian statistics there’s the concept of ‘weakly informative priors’, which are priors that are quite broad and conservative, but don’t concentrate almost all of their mass on values that no one thinks are plausible. For example, if I’m estimating the effect of a drug, I might choose priors that give low mass to biologically implausible effect sizes. If it’s a weight gain drug, perhaps I’d pick a normal distribution with less than 1% probability mass for more than 100% weight increase or 50% weight decrease. Still pretty conservative, but mostly captures people’s intuitions of what answers would be crazy.
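To make this concrete, here is a minimal sketch of how one might pick such a prior numerically, assuming SciPy. The center of the prior and the exact 1%-per-tail target are illustrative assumptions, not choices from the original text:

```python
from scipy.stats import norm

# Illustrative weakly informative prior on percent weight change.
# Center the normal between the two "crazy" boundaries (+100% and -50%),
# and choose the scale so each tail beyond a boundary gets ~1% of the mass.
center = 25.0                        # midpoint of (-50, 100); an assumption
half_width = 75.0                    # distance from center to either boundary
scale = half_width / norm.ppf(0.99)  # roughly 32.2

prior = norm(loc=center, scale=scale)
print(f"P(effect > +100%): {prior.sf(100):.3f}")   # ~0.010
print(f"P(effect < -50%):  {prior.cdf(-50):.3f}")  # ~0.010
```

This prior is still broad (a standard deviation of about 32 percentage points) but pushes almost all of its mass away from effect sizes nobody considers plausible.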
Sometimes this is pretty useful, and sometimes not. It's going to be most useful when you don't have much evidence, and also when your model is not well constrained along some dimensions (such as when you have multiple sources of variance). It's also going to be useful when there are a ton of answers that seem implausible.
Andrew Gelman has some recent discussion here.