I think this thought has analogues in Bayesian statistics.
We choose a prior. Let’s say, for the effect size of a treatment. What’s our prior? Let’s say Gaussian with mean 0 and standard deviation equal to the typical effect size for this kind of treatment.
But how do we know that typical effect size? We could actually treat this prior as a posterior, updated from a uniform prior by previous studies. This would be a Bayesian meta-analysis.
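To make that concrete, here is a minimal sketch of the update. The study estimates and standard errors are made up, and it assumes the simplest fixed-effect setting: each previous study reports a Gaussian estimate of a common true effect with known standard error, so updating a flat prior gives a precision-weighted Gaussian posterior.

```python
import numpy as np

# Hypothetical effect estimates and standard errors from previous studies.
estimates = np.array([0.30, 0.12, 0.25])
std_errors = np.array([0.10, 0.15, 0.08])

# Starting from a uniform (flat) prior, Gaussian likelihoods with known
# variances give a Gaussian posterior: precision-weighted mean, summed precision.
precisions = 1.0 / std_errors**2
post_var = 1.0 / precisions.sum()
post_mean = post_var * (precisions * estimates).sum()

print(f"meta-analytic prior: N({post_mean:.3f}, sd={np.sqrt(post_var):.3f})")
```

That posterior would then serve as the prior for the new study’s effect size. A real meta-analysis would probably want a random-effects model with between-study variance, but the idea is the same.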
I’ve never seen anyone formally do a meta-analysis just to get a prior. At some point, you decide your assumed probability distributions are close enough that more effort wouldn’t change the final result. Really, all mathematical modeling is like this. We model the Earth as a point, or a sphere, or a more detailed shape, depending on what we can get away with in our application. We make this judgment informally, but we expect that a formal analysis would back it up.
As for these ranges and bounds… that reminds me of the robustness analysis they do in Bayesian statistics. That is, vary the prior and see how it affects the posterior. This is generally done within a parametric family of priors, so you just vary the parameters. The hope is that you get about the same results within some reasonable range of priors, but you don’t get strict bounds.
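Here is what that sweep might look like in the conjugate Gaussian case (the observed effect and standard error are made up; the parametric family is N(0, tau), varying tau):

```python
import numpy as np

# Hypothetical data from the new study: effect estimate and its standard error.
obs, obs_se = 0.40, 0.12

# Vary the prior standard deviation tau within the family N(0, tau)
# and see how much the posterior for the effect moves.
for tau in [0.05, 0.10, 0.20, 0.50, 1.00]:
    prior_prec = 1.0 / tau**2          # prior precision
    like_prec = 1.0 / obs_se**2        # likelihood precision
    post_var = 1.0 / (prior_prec + like_prec)
    post_mean = post_var * like_prec * obs  # prior mean is 0
    print(f"tau={tau:4.2f}  posterior mean={post_mean:.3f}  sd={np.sqrt(post_var):.3f}")
```

If the posterior mean barely changes across the reasonable range of tau, the conclusion is robust to the prior; if it swings around, the prior is doing a lot of the work.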
I like these observations! As for your last point about ranges and bounds, I’m actually moving towards relaxing those in future posts: basically, I want to look at the tree case, where more than one variable feeds into each node, and argue that even if the conditional probabilities are all 0s and 1s (so we don’t get any hard bounds with arguments like the one I present here), there can still be strong concentration towards one answer.