If your previous model crashed and burned, you do not need another one with “more power and precision”; you need one that works.
It’s common to speak of two axes of development, called something like evolutionary/revolutionary, continuous/discontinuous, horizontal/vertical, etc. One is incremental improvement; the other is a radical jump. “More power and precision” implies you want to take the incremental-improvement route. I’m arguing for the radical jump.
You may not know a priori whether your theory needs a “radical jump” or an “incremental improvement.” But it still seems to be the historical pattern that theories gain in complexity over the long term. General Relativity is more complex than Newton’s Laws, which are more complex than Heliocentrism or Geocentrism, and String Theory is more complex than all of those. Multiverse theories add a whole new layer of parameters.
If you have a model that has worked pretty well in one regime but completely fails in a different regime, then you probably need a new theory that is both a “radical jump” from the previous one and likely to be more complex. You are now asking one model to cover more scenarios and regimes, so the new model will typically have more parameters than the old one. As long as you have been operating under Occam’s Razor reasonably well since the beginning, then as you add variables, or as the support of their distributions grows, your model has to not only perform as well as it did in the old regime but also work in the new one. Think about an exponential growth model: it works just fine in the beginning when your population is small, then fails dramatically in the regime where your population saturates. You update your model to a logistic growth function, which captures both regimes better, but it adds a new parameter, namely the carrying capacity.
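To make the exponential-versus-logistic example concrete, here is a minimal sketch (the parameter values r, K, and p0 are illustrative assumptions, not fitted to anything). It shows the two models nearly agreeing in the small-population regime and diverging badly once the population approaches the carrying capacity:

```python
import math

# Illustrative parameters: growth rate r, carrying capacity K, initial population p0.
r, K, p0 = 0.5, 1000.0, 10.0

def exponential(t):
    # The old model: fine while the population is small.
    return p0 * math.exp(r * t)

def logistic(t):
    # The new model: one extra parameter (K) captures both regimes.
    return K / (1 + ((K - p0) / p0) * math.exp(-r * t))

# Early regime: the two models are nearly indistinguishable.
print(exponential(1), logistic(1))

# Saturation regime: exponential blows up, logistic levels off near K.
print(exponential(20), logistic(20))
```

The extra parameter is the price of covering the new regime: the logistic model reduces to near-exponential behavior when the population is far below K, so it loses nothing in the old regime.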
I’m not saying this pattern holds in literally every situation in which a theory fails. Your model may just suck, or be overcomplicated from the start. But if we have been fairly principled about how we design our models, and gradually expand them to explain more things, then this pattern should generally hold.