When our predictions fail in a big way, this is usually evidence that our model needs to be updated in the direction of more complexity.
Well, no. When predictions fail in a big way, this is usually evidence that your model is wrong and needs to be discarded. Adding epicycles helps only if you already have the basic things right, and failing in a big way shows that you do NOT have the basic things right.
Well, what I meant by complexity was specifically not adding epicycles, excuses, or special cases to a model we really like, but rather replacing the model with one that has more power and precision. Yes, sometimes that means switching to a simpler theory (heliocentrism initially required fewer parameters than geocentrism to make it work), but the long-term trend seems to be toward models with more parameters. That doesn't mean throwing away Occam's razor, just that more accurate predictions usually require a model with more knobs and levers. And that may only be because we now need to model more interactions than were there originally. Maybe our system has become entangled with another system it wasn't interacting with before.
If your previous model crashed and burned, you do not need another one with “more power and precision”, you need one that works.
It’s common to speak of two axes of development, called something like evolutionary/revolutionary, continuous/discontinuous, or horizontal/vertical. One is incremental improvement, the other is a radical jump. “More power and precision” implies you want to take the incremental route. I’m arguing for the radical jump.
You may not know a priori whether your theory needs a “radical jump” or an “incremental improvement.” But it still seems to be the historical pattern that theories gain in complexity over the long term: General Relativity is more complex than Newton’s laws, which are more complex than heliocentrism or geocentrism, and String Theory is more complex than all of those. Multiverse theories add a whole new layer of parameters.
If you have a model that has worked pretty well in one regime but completely fails in a different regime, then you probably need a new theory that is both a “radical jump” from the previous one and likely to be more complex. You are now asking a single model to cover more scenarios and regimes, and that model will typically have more parameters than the one it replaces. If you have been applying Occam’s razor reasonably well from the beginning, then as you add more variables, or as the support of their distributions expands, your model has to perform as well as it did before in the old regime while also working in the new one. Think of an exponential growth model: it works just fine early on, while the population is small, then fails dramatically once the population saturates. You update to a logistic growth model, which captures both regimes, but at the cost of a new parameter: the carrying capacity.
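To make the regime-change point concrete, here is a minimal sketch (the parameter values and variable names are my own illustrative assumptions, not from the discussion above). The exponential model tracks a logistic process closely while the population is far below the carrying capacity, then diverges wildly once saturation kicks in:

```python
# Sketch: exponential vs. logistic growth across two regimes.
# All parameter values (p0, r, k) are illustrative assumptions.
import numpy as np

def exponential(t, p0, r):
    # Two parameters: fine while the population is far below saturation.
    return p0 * np.exp(r * t)

def logistic(t, p0, r, k):
    # One extra parameter, k (the carrying capacity), covers both regimes.
    return k / (1 + (k / p0 - 1) * np.exp(-r * t))

t = np.linspace(0, 20, 201)
truth = logistic(t, p0=10, r=0.6, k=1000)  # pretend this is the observed data

exp_pred = exponential(t, p0=10, r=0.6)
rel_err = np.abs(exp_pred - truth) / truth

print(f"max relative error for t < 3:  {rel_err[t < 3].max():.1%}")   # a few percent
print(f"max relative error for t > 15: {rel_err[t > 15].max():.0%}")  # enormous
```

The point of the sketch is the asymmetry: in the old regime the simpler model is nearly indistinguishable from the truth, so Occam’s razor rightly favors it, and only data from the new regime forces the extra parameter.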
I’m not saying this pattern holds in literally every conceivable situation in which a theory fails. Your model may just suck, or be overcomplicated from the start. But if we have been fairly principled about how we design our models, and gradually expand them to explain more things, then this pattern should generally hold.