I really like this topic, and I’m really glad you brought it up; it probably even deserves its own post.
There are definitely some people who are trying this, or similar approaches. I’m pretty sure it’s one of the end goals of Stephen Wolfram’s “New Kind of Science” and the idea of high-throughput searching of data for latent mathematical structure is definitely in vogue in several sub-branches of physics.
With that being said, while the idea has caught people’s interest, it’s far from obvious that it will work. There are a number of difficulties and open questions, both with the general method and the specific instance you outline.
“As far as I know, (1) we assume that the laws of the universe are simple”
It’s not clear that this is a good assumption, and it’s not totally clear what exactly it means. There are several difficulties:
a.) We know that the universe exhibits regular structure on some length and time scales, but that’s almost certainly a necessary condition for the evolution of complex life, and the anthropic principle makes that very weak evidence that the universe exhibits similar regular structure on all length/time/energy scales. While clever arguments based on the anthropic principle are typically profoundly unsatisfying, the larger point is that we don’t know that the universe is entirely regular/mathematical/computable and it’s not clear that we have strong evidence to believe it is. As an example, we know that a vanishingly small percentage of real numbers are computable; since there is no known mechanism restricting physical constants to computable numbers, it seems eminently possible that the values taken by physical constants such as the gravitational constant are not computable.
b.) It’s also not really clear what it means to say the laws of physics are simple. Simplicity is a somewhat slippery concept. We typically talk about it in terms of Occam’s razor and/or various mathematical formalizations of it such as minimum message length and Kolmogorov complexity. Roughly, the complexity of something is the length of the shortest explanation of it, but the length of an explanation depends strongly on the language used to give it. While the mathematics we’ve developed can state physical laws relatively concisely, that doesn’t tell us very much about the complexity of the laws of physics, since mathematics was often created for just that purpose. Even assuming that all of physics can be concisely described in the language of mathematics, I’m not sure that mathematics itself is “simple”.
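As a toy sketch of how description length is language-relative (this is only an illustration; Kolmogorov complexity proper is defined over programs for a fixed universal machine, and is uncomputable):

```python
# Toy illustration: the "length" of a description depends on the language.
# The same 1000-character string has a long literal description, but a very
# short description in the "language" of Python expressions.
data = "01" * 500  # 1000 characters of alternating bits

literal_description = repr(data)       # spell the string out verbatim
program_description = "'01' * 500"     # a Python expression that generates it

print(len(literal_description))   # 1002: the 1000 characters plus two quotes
print(len(program_description))   # 10
```

The usual patch is to fix one reference language once and for all, but the choice of that language is exactly where the disagreement about what counts as “simple” hides.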
c.) Simple laws don’t necessarily lead to simple results. If I have a set of 3 objects interacting with each other via a 1/r^2 force like gravity, there is no general closed-form solution for the positions of those objects at some time t in the future. I can simulate their behavior numerically, but numerical simulations are often computationally expensive, the numeric results may depend on the initial conditions in unpredictable ways, and small deviations in the initial setup or rounding errors early in the simulation may result in wildly different outcomes. This difficulty strongly affects our ability to model the chemical properties of atoms: since each electron orbiting the nucleus interacts with every other electron via the Coulomb force, there is currently no way to exactly describe the behavior of the electrons even for a single isolated many-electron atom.
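To make the sensitivity concrete, here is a minimal and deliberately crude sketch: a planar three-body simulation with naive Euler integration, run twice with one starting coordinate perturbed by one part in a billion. The masses, starting positions, time step, and softening term are all invented for illustration; how fast the two runs diverge depends on this configuration.

```python
import math

def accelerations(pos, masses, G=1.0, eps=1e-3):
    # Pairwise 1/r^2 gravitational accelerations in 2D; eps is a small
    # softening term that keeps close encounters from blowing up numerically.
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r = math.hypot(dx, dy) + eps
            acc[i][0] += G * masses[j] * dx / r**3
            acc[i][1] += G * masses[j] * dy / r**3
    return acc

def simulate(pos0, vel0, masses, dt=0.01, steps=2000):
    # Naive Euler integration: exactly the kind of scheme whose
    # discretization and rounding errors the text is talking about.
    pos = [list(p) for p in pos0]
    vel = [list(v) for v in vel0]
    for _ in range(steps):
        acc = accelerations(pos, masses)
        for i in range(len(pos)):
            vel[i][0] += acc[i][0] * dt
            vel[i][1] += acc[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

masses = [1.0, 1.0, 1.0]
start = [[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]]
vels = [[0.0, 0.1], [0.0, -0.1], [0.1, 0.0]]

final_a = simulate(start, vels, masses)

perturbed = [list(p) for p in start]
perturbed[2][0] += 1e-9  # nudge one coordinate by one part in a billion
final_b = simulate(perturbed, vels, masses)

divergence = max(math.hypot(a[0] - b[0], a[1] - b[1])
                 for a, b in zip(final_a, final_b))
print(divergence)
```

The point is not the particular number printed, but that a perturbation far below any realistic measurement precision still propagates into the final state.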
d.) A simple set of equations is insufficient to specify a physical system. Most physical laws are concerned with the time evolution of physical systems, and they typically take the initial state of the system as a set of input parameters. For many of the systems physics is still trying to understand, it isn’t possible to accurately determine what the correct input parameters are. Because of the potentially strong dependence on initial conditions outlined in c.), it’s difficult to know whether a negative result for a given set of equations/parameters means we need a new set of laws, or just slightly different initial conditions.
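A toy sketch of that ambiguity, using the logistic map as a stand-in for a candidate law (the map, the parameter value, and the size of the perturbation are all invented for illustration):

```python
# The logistic map x -> r*x*(1-x) stands in for a candidate physical law.
# We generate "observed" data from it, then test the *correct* law with a
# slightly wrong initial condition: the fit still fails badly.
def trajectory(x0, r=3.9, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

observed = trajectory(0.2)          # "experimental" data
candidate = trajectory(0.20001)     # right law, initial condition off by 1e-5

errors = [abs(a - b) for a, b in zip(observed, candidate)]
print(max(errors))  # many orders of magnitude larger than the 1e-5 input error
```

Seen from the outside, this failed fit is indistinguishable from having hypothesized the wrong law.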
In short, your proposal is difficult to enact for much the same reasons that Solomonoff induction is difficult to carry out. In general there is a vast hypothesis space that varies over both a potentially infinite set of equations and a large number of initial conditions, and the computational cost of evaluating a given hypothesis is unknown and potentially very large. There is the added difficulty that, even given an infinite set of initial hypotheses, the correct hypothesis may not be among them.
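The flavor of the search problem can be sketched with a deliberately tiny toy: brute-force enumeration of candidate “laws” of the form f(n) = a·n^p + b against a handful of observations. The grammar and coefficient ranges here are invented; the real problem is that the space of equations is unbounded, each evaluation may itself be expensive, and nothing guarantees the true law is inside the space you chose to enumerate.

```python
import itertools

# "Observations": the data our candidate laws must reproduce.
data = [1, 4, 9, 16, 25]  # values at n = 1..5

# Hypothesis space: f(n) = a * n**p + b over small integer coefficients.
# Even this toy space has 7 * 4 * 7 = 196 candidates; richer grammars
# grow combinatorially.
hypotheses = itertools.product(range(-3, 4), range(0, 4), range(-3, 4))

matches = [(a, p, b) for a, p, b in hypotheses
           if all(a * n**p + b == y for n, y in zip(range(1, 6), data))]

print(matches)  # [(1, 2, 0)], i.e. f(n) = n**2
```

Here exact integer matching makes the check trivial; with noisy real-valued data and laws that must be numerically integrated from uncertain initial conditions, every one of those candidate evaluations becomes a simulation like the one in c.).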