hi habryka,
It wasn’t my purpose to open a discussion of the interpretation of quantum mechanics; I only used it as an example.
My point is something else entirely: scientists have been leaning very heavily on William of Occam for a long while now. But try to pin down what they mean by the relative complexity of an explanation, and they shrug their shoulders.
It’s not even the case that scientists disagree on which metric to apply. (That would just be normal business!) But, as far as I know, no one has made a serious effort to define a metric. Maybe because they can’t?
A very unscientific behaviour indeed!
Yes, and the sequence (as well as the post I linked below) tries to define a complexity measure based on Solomonoff Induction, which is a formalization of Occam’s Razor.
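For concreteness, here is the standard form of that measure (a sketch only, nothing specific to the linked post): the universal prior assigns every finite observation string $x$ the weight

$$
M(x) \;=\; \sum_{p \,:\, U(p)\,=\,x*} 2^{-\ell(p)},
\qquad
K(x) \;=\; \min\{\ell(p) : U(p) = x\},
$$

where $U$ is a fixed universal prefix machine, $\ell(p)$ is the length of program $p$ in bits, and the sum runs over programs whose output begins with $x$. A hypothesis then gets prior weight on the order of $2^{-K}$, which is the formal reading of “simpler explanations are more probable”.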
I have the impression that Solomonoff Induction provides a precise procedure for only a very narrow set of problems, with little practical applicability elsewhere.
How would you use Solomonoff Induction to choose between the two alternative theories mentioned in the article: one based on Newton’s force laws, the other on the principle of least action? (Both theories have the same range of validity and produce identical results.)
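(Just to put the two formulations side by side, for a single particle in a potential $V$ they are roughly

$$
m\,\ddot{\mathbf{x}} = -\nabla V(\mathbf{x})
\qquad\text{versus}\qquad
\delta S = 0,\quad S[\mathbf{x}] = \int_{t_1}^{t_2} \Big(\tfrac{1}{2}\,m\,\dot{\mathbf{x}}^{2} - V(\mathbf{x})\Big)\,dt ,
$$

and the Euler–Lagrange equations of the second reproduce the first, so the empirical content is identical; the whole question is which set of axioms counts as “shorter”.)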
But it isn’t very successful, because if you cast SI in terms of a linear string of bits, as is standard, you are building in a kind of single-universe assumption.
First, I assume you mean a sequential string of bits. “Linear” has a well-defined meaning in math that doesn’t make sense in the context in which you used it.
Second, can you explain what you mean by that? It doesn’t sound correct. I mean, an agent can only make predictions about its observable universe, but that’s true of humans too. We can speculate about multiverses and how they may shape our observations (e.g. the many-worlds interpretation of quantum mechanics), but so could an SI agent.
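As I understand the standard presentation (this is the textbook setup, not anything specific to this thread), the bit string is just the agent’s observation sequence, and prediction works by conditioning the universal prior on what has been seen so far:

$$
M(x_{n+1} \mid x_{1:n}) \;=\; \frac{M(x_{1:n}\, x_{n+1})}{M(x_{1:n})},
$$

so any multiverse structure lives inside the candidate programs $p$, not in the string itself. The string only commits you to there being a single stream of observations, which is also true for a human observer.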
I think your example of interpreting quantum mechanics gets pretty close to the heart of the matter. It’s one thing to point at Solomonoff induction and say, “there’s your formalization”. It’s quite another to understand how Occam’s Razor is used in practice.
Nobody actually tries to convert the Standard Model to the shortest possible computer program, count the bits, and compare it to the shortest possible computer program for string theory or whatever.
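(If you did try, the crudest practical stand-in would be something like a compression-based description-length comparison. Here’s a toy sketch in Python, with zlib standing in for “shortest program” and made-up placeholder strings standing in for actual formalizations; neither is how anyone really encodes a physical theory:)

```python
import zlib

def description_length_proxy(theory: str) -> int:
    # Toy stand-in for Kolmogorov complexity: the size in bytes of the
    # zlib-compressed statement of the theory. The real quantity (length
    # of the shortest program that reproduces the theory) is uncomputable.
    return len(zlib.compress(theory.encode("utf-8"), 9))

# Hypothetical placeholder "theories" -- nobody has a canonical
# machine-readable encoding of the Standard Model or string theory.
theory_a = "unitary evolution of the wavefunction plus the Born rule"
theory_b = theory_a + " plus a collapse postulate"

print(description_length_proxy(theory_a))  # smaller
print(description_length_proxy(theory_b))  # larger: the extra postulate costs bits
```

The comparison is only meaningful here because theory_b is literally theory_a with text appended, i.e. the “same theory plus an extra postulate” case; for two genuinely different formalisms the proxy tells you very little.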
What you’ll find, however, is that some theories amount to other theories plus an extra postulate or two (e.g. many worlds vs. Copenhagen), so they are strictly more complex. If the extra postulates don’t explain anything the simpler theory doesn’t, the added complexity isn’t justified.
A lot of the progression of science over the last few centuries has been toward unifying diverse theories under simpler, more general frameworks. Special relativity helped unify theories of the electric and magnetic forces, which were then unified with the weak nuclear force and eventually the strong nuclear force. A lot of that work has helped explain the composition of the periodic table and the underlying mechanisms of chemistry. In other words, where there used to be many separate theories, there are now only two theories that explain almost every phenomenon in the observable universe, and those two theories are based on surprisingly few and surprisingly simple postulates.
Over the 20th century the trend was toward reducing postulates while explaining more, so it was pretty clear that Occam’s Razor was being followed. Since then we’ve run into a bit of an impasse, with GR and QFT not unifying nicely and with discoveries like dark energy and dark matter.