Actually, I just want to patch the overall flow of argument in the Sequences.
I think we all agree that some form of Occam’s razor makes sense, and that it can become a quantifiable criterion if you pose it in terms of “how many bits does it take to specify this hypothesis as a predictive algorithm?”
But I don’t agree that Everett versus Collapse is a very good illustration of the razor, because you have to smuggle collapse back into MWI as a calculational ansatz (the Born rule) in order to make it predictively useful. So I count this as one example where the overall structure of the Sequences calls for a particular argument or case study but the QM Sequence doesn’t really deliver, so a substitute example should be found.
I think we all agree that some form of Occam’s razor makes sense
Yes.
and that it can become a quantifiable criterion if you pose it in terms of “how many bits does it take to specify this hypothesis as a predictive algorithm?”
That’s a much stronger claim:
It’s unclear whether this is the proper way of formalizing Occam’s razor.
Even if it were, it is an uncomputable approach, and, as far as I know, there aren’t even reasonable methods to approximate it in the general case.
It’s uncomputable if you want to compare all possible hypotheses according to this criterion, but not if you just want to distinguish between hypotheses that are already available and fully specified. Also, it doesn’t have to be the way to formalize the razor in order to have some validity.
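A minimal toy sketch of that restricted comparison, assuming the hypotheses are handed to us as Python predictive functions and using compressed source length as a crude stand-in for description length (an encoding-dependent upper bound, nothing more):

    import inspect
    import zlib

    # Two fully specified hypotheses, each given as a predictive algorithm.
    def hypothesis_constant(history):
        # Predict that the next observation repeats the last one.
        return history[-1]

    def hypothesis_linear(history):
        # Predict that the next observation continues the last linear trend.
        return 2 * history[-1] - history[-2]

    def description_bits(hypothesis):
        # Crude proxy for "bits to specify this hypothesis": compressed source length.
        source = inspect.getsource(hypothesis).encode("utf-8")
        return 8 * len(zlib.compress(source))

    for h in (hypothesis_constant, hypothesis_linear):
        print(h.__name__, description_bits(h), "bits (rough upper bound)")

No shortest-program search is needed here, because the candidates are already written down; the uncomputability only bites when you quantify over all possible programs.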
not if you just want to distinguish between hypotheses that are already available and fully specified.
This assumes that you can computably map each hypothesis to a single program (or a set of programs where you can computably identify the shortest element).
For arbitrary hypotheses, this is impossible due to Rice’s theorem.
If you write your hypotheses as propositions in a formal language, then by restricting the language you can get decidability at the expense of expressive power. The typical examples are the static type systems of many popular programming languages (though notably not C++).
Even then, you run into complexity issues: for instance, type checking is NP-complete in C#, and I conjecture that the problem of finding the shortest C# program that satisfies some type-checkable property is NP-hard (the obvious brute-force way of doing it is to enumerate programs in order of increasing length and check them until one passes).
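A toy illustration of that brute force, with a tiny arithmetic expression language standing in for C# and “evaluates to 10” standing in for a type-checkable property (both stand-ins are simplifications, not claims about C#): restricting the language makes the check decidable, sidestepping Rice’s theorem, but the search space still grows exponentially with program length.

    from itertools import product

    ALPHABET = "123456789+*"   # the whole "language": digits and two operators

    def passes(program):
        # Stand-in for a type-checkable property: does the program evaluate to 10?
        try:
            return eval(program) == 10
        except Exception:       # reject syntactically invalid strings
            return False

    def shortest_passing_program(max_len=4):
        # Enumerate programs in order of increasing length; the first hit is shortest.
        for length in range(1, max_len + 1):
            for chars in product(ALPHABET, repeat=length):
                program = "".join(chars)
                if passes(program):
                    return program
        return None

    print(shortest_passing_program())   # finds a 3-character program such as "1+9"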
You don’t smuggle it in as an ansatz.
You apply the generalized anti-zombie principle. There was a reason he went there first.
But that just tells you that other branches are conscious too. It doesn’t give you the Born rule. So it is an ansatz: a guess about what formula to use. And until you can derive it from the unitary evolution rule, it counts separately towards the complexity of your model.
If you get as far as allowing that the other branches are conscious—no, simply that you even HAVE branches, and they should be related by probabilities that don’t change retroactively—then you have been granted sufficient grounds to derive the Born Rule.
It’s getting that far that’s the hard part.
EDIT: I’ve provided this derivation already, here
Does that help?
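For readers who want the shape of such an argument, here is a hedged sketch of one standard fine-graining route (not necessarily the derivation linked above). Suppose a branch structure

\[ |\psi\rangle = \alpha\,|A\rangle|E_A\rangle + \beta\,|B\rangle|E_B\rangle, \qquad |\alpha|^2 = \tfrac{m}{n}, \quad |\beta|^2 = \tfrac{n-m}{n}, \]

and fine-grain the environment into orthonormal states \(|e_1\rangle,\dots,|e_n\rangle\) with

\[ |E_A\rangle = \tfrac{1}{\sqrt{m}}\sum_{k=1}^{m}|e_k\rangle, \qquad |E_B\rangle = \tfrac{1}{\sqrt{n-m}}\sum_{k=m+1}^{n}|e_k\rangle, \]

so that, up to phases,

\[ |\psi\rangle = \tfrac{1}{\sqrt{n}}\Big(\sum_{k=1}^{m}|A\rangle|e_k\rangle + \sum_{k=m+1}^{n}|B\rangle|e_k\rangle\Big). \]

All \(n\) fine-grained branches now have equal amplitude, so a symmetry (swap) argument forces each to have probability \(1/n\); the \(m\) branches containing \(|A\rangle\) then sum to \(P(A) = m/n = |\alpha|^2\), and continuity extends this from rational to arbitrary \(|\alpha|^2\). The heavy lifting is exactly the part conceded above: that there are branches at all, related by probabilities that do not change retroactively.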