Obviously the first thing to do is to divide the research into subproblems and research them in parallel. But what if the number of research teams still exceeds the number of real subproblems identified?
This is easy but not necessarily optimal. Sometimes you want to overkill a hypothesis before falling back. Imagine a scenario where the top hypothesis has a 50% prior probability, you run an experiment powered to have a 10% error rate in definitively accepting/rejecting it, and you could run a second experiment reducing that error to 5%; do you really want to instead spend that second experiment testing a dark horse hypothesis with a prior probability of 1%? Probably better to drive the error on that top hypothesis down to <1% first, at which point the marginal value of the other hypotheses becomes larger, before investing in buying lottery tickets.
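To make the intuition concrete, here is a minimal sketch under a stylized model of my own choosing (a single binary accept/reject hypothesis per test, a symmetric error rate, expected entropy reduction in bits as the value metric, and the assumption that the first 10%-error test has already pushed the top hypothesis to roughly 90% either way); none of that is forced by the scenario, it just makes the comparison computable:

```python
from math import log2

def binary_entropy(p: float) -> float:
    """Entropy (bits) of a yes/no hypothesis held with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def expected_info_gain(prior: float, error: float) -> float:
    """Expected entropy reduction from one accept/reject test with a
    symmetric `error` rate, applied to a hypothesis held at `prior`."""
    p_positive = prior * (1 - error) + (1 - prior) * error
    post_if_positive = prior * (1 - error) / p_positive
    post_if_negative = prior * error / (1 - p_positive)
    expected_posterior_entropy = (
        p_positive * binary_entropy(post_if_positive)
        + (1 - p_positive) * binary_entropy(post_if_negative)
    )
    return binary_entropy(prior) - expected_posterior_entropy

# Top hypothesis: 50% prior, and a first 10%-error test has moved it to ~90%
# (or, symmetrically, ~10%); value of a second 10%-error test on it:
retest_top = expected_info_gain(0.9, 0.10)
# Dark horse: 1% prior, first 10%-error test:
test_dark_horse = expected_info_gain(0.01, 0.10)
print(f"retest top hypothesis: {retest_top:.3f} bits")
print(f"test dark horse:       {test_dark_horse:.3f} bits")
```

Under this toy model the re-test of the leader is worth about 0.21 bits versus about 0.025 bits for the lottery ticket, nearly an order of magnitude apart; and a single 10%-error test cannot move a 1% prior anywhere near an accept threshold anyway (a positive result only raises it to roughly 8%).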
This is a pretty classic sort of multi-stage decision problem in research, so relevant work comes up everywhere depending on how you look at it: it’s related to experiment design, particularly factorial design; to external vs internal validity, especially in meta-analysis, where you balance between-study measurement of heterogeneity/systematic error against overcoming within-study random sampling error; to group testing; and to parallelized blackbox optimization (especially hyperparameter optimization, where you can more easily run many models in parallel than one model really fast), where you have to spread sampling across multiple arms of the loss landscape and avoid over-concentrating on narrow regions of settings.
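On the parallelized-optimization angle, one standard allocation rule (not something the discussion above commits to) is batch Thompson sampling: each parallel slot goes to whichever arm wins a draw from its posterior, which keeps most effort on the leader without ever fully starving the long shots. A minimal sketch, assuming Beta-Bernoulli arms and made-up counts:

```python
import random

def allocate_batch(successes, failures, batch_size):
    """Assign a batch of parallel runs across candidates by Thompson sampling:
    each slot goes to whichever candidate wins one posterior draw, so effort
    spreads roughly in proportion to each candidate's chance of being best."""
    counts = [0] * len(successes)
    for _ in range(batch_size):
        draws = [random.betavariate(s + 1, f + 1)  # Beta(1,1) prior per arm
                 for s, f in zip(successes, failures)]
        counts[draws.index(max(draws))] += 1
    return counts

# Illustrative record so far: a clear leader, a middling arm, and a long shot.
successes = [8, 4, 1]
failures  = [2, 6, 9]
print(allocate_batch(successes, failures, batch_size=10))
# Typical result: most slots go to the leader, but a couple still probe the
# others, so the batch never collapses onto one narrow region of settings.
```

The same mechanism maps back onto hypotheses: an arm with only a small chance of being best still gets the occasional slot, but only occasional ones, which is the lottery-ticket budgeting described above.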