Suppose, counterfactually, that Many Worlds QM and Collapse QM really always made the same predictions, so that you want to say they are both the same theory, QM. It still makes sense to ask what the complexity of Many Worlds QM is and how much probability it contributes to QM, and what the complexity of Collapse QM is and how much probability it contributes to QM. It even makes sense to say that Many Worlds QM has a strictly smaller complexity, contributes more probability, and is the better formulation.
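One way to make “contributes probability” concrete, sketched under the assumption of a prefix-free universal machine $U$ with $\ell(p)$ the length of program $p$, and with $p_{\mathrm{MW}}$ and $p_{\mathrm{C}}$ standing for hypothetical programs implementing the two formulations:

```latex
% Each program reproducing the shared predictions adds weight 2^{-length};
% the prior mass on those predictions sums these weights, so the shorter
% formulation contributes the larger term.
P(\text{QM predictions})
  \;=\; \sum_{p \,:\, U(p) \,=\, \text{QM predictions}} 2^{-\ell(p)}
  \;\ge\; 2^{-\ell(p_{\mathrm{MW}})} + 2^{-\ell(p_{\mathrm{C}})},
\qquad
\ell(p_{\mathrm{MW}}) < \ell(p_{\mathrm{C}})
  \;\Longrightarrow\;
  2^{-\ell(p_{\mathrm{MW}})} > 2^{-\ell(p_{\mathrm{C}})}.
```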
> It still makes sense to ask what the complexity of Many Worlds QM is and how much probability it contributes to QM, and what the complexity of Collapse QM is and how much probability it contributes to QM.
You can of course introduce the universal prior over equivalent formulations of a given theory, and state how much each formulation weighs according to this prior, but I don’t see in what way this is a natural structure to consider, or what questions it helps one understand better.
It seems you want to define the complexity of QM by summing over all algorithms that can generate the predictions of QM, rather than just taking the shortest one. In that case you should probably take the same approach to defining the K-complexity of bit strings: sum over all algorithms that print the string, rather than taking just the shortest one. Do you subscribe to that point of view?
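For reference, the two definitions being contrasted, written in standard notation under the assumption of a prefix-free universal machine $U$ with $\ell(p)$ the program length:

```latex
% Shortest-program (Kolmogorov) complexity of a bit string x:
K(x) \;=\; \min\{\, \ell(p) : U(p) = x \,\}
% versus the total weight of all programs that print x (a Solomonoff-style sum):
M(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-\ell(p)}
```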
> It seems you want to define the complexity of QM by summing over all algorithms that can generate the predictions of QM, rather than just taking the shortest one.
Yes, though to be clear, it is the prior probabilities associated with the complexities of the individual algorithms that I would sum to get the prior probability that their common set of predictions is correct. I don’t consider the common set of predictions to have a conceptually useful complexity in the same sense that the algorithms do.
> In that case you should probably take the same approach to defining the K-complexity of bit strings: sum over all algorithms that print the string, rather than taking just the shortest one. Do you subscribe to that point of view?
I would apply the same approach to making predictions about bit strings.
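A toy sketch of that prediction-by-mixture approach: a hand-picked, hypothetical candidate set stands in for a real enumeration of programs, and the “program lengths” are made-up illustrative numbers, not anything from the exchange.

```python
# Toy Solomonoff-style mixture: weight each candidate program by 2^(-length),
# keep only those consistent with the observed bits, and predict the next bit
# by the weighted vote of the survivors.

def predict_next_bit(observed, candidates):
    """observed: a string of '0'/'1'; candidates: list of (generator, length)."""
    weight_one = 0.0
    weight_total = 0.0
    for generate, length in candidates:
        prediction = generate(len(observed) + 1)   # at least the first n+1 output bits
        if not prediction.startswith(observed):
            continue                               # inconsistent with the data: drop it
        w = 2.0 ** (-length)                       # prior weight from "program length"
        weight_total += w
        if prediction[len(observed)] == '1':
            weight_one += w
    return weight_one / weight_total if weight_total else 0.5

# Hypothetical candidates: (function producing at least the first n output bits, "length").
candidates = [
    (lambda n: '01' * n, 10),        # alternating bits, short program
    (lambda n: '0' * n, 8),          # all zeros, even shorter
    (lambda n: '0101100' * n, 25),   # a messier pattern, longer program
]

# Prints a probability near 0 for the next bit being 1:
# the short alternating-bit program dominates the surviving mixture.
print(predict_next_bit('0101', candidates))
```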
Why? Both are bit strings, no?
My computer represents numbers and letters as bit strings. This doesn’t mean it makes sense to multiply letters together.
This is related to a point that I attempted to make previously. You can measure complexity, but you must pick the context appropriately.