constant—well, then, it is shaping up as follows: We need some concept of world. We can try to be exact about it, and run into various problems, as I have suggested above. Or we can be determinedly vague about it—e.g. saying that a world is a roughly decoherent blob of amplitude—and run into other problems. And then on top of this we can’t even recover the quantitative side of quantum mechanics.
There is a form of many-worlds that gives you the correct probabilities back. It’s called consistent histories or decoherent histories. It has two defining features. First, the histories in question are “coarse-grained”. For example, if your basic theory were a field theory, in one of these consistent histories you don’t specify a value for every field at every space-time point, just a scattering of them. Second, each consistent history has a global probability associated with it—not a probability amplitude, just an ordinary probability. Within this framework, if you want to calculate a transition probability—the odds of B given A—first you restrict attention to those histories in which A occurs, and then you compute Pr(B|A) as a ratio of those a priori global probabilities: the total probability of histories containing both A and B, divided by the total probability of histories containing A.
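To make the conditioning step concrete, here is a toy sketch of my own (the histories, events, and numbers are invented for illustration, not taken from any actual physical model): each coarse-grained history carries an a priori global probability, and Pr(B|A) is just ordinary conditioning over the set of histories.

```python
# Hypothetical coarse-grained histories: (set of events occurring in the
# history, its a priori global probability). Probabilities sum to 1.
histories = [
    ({"A", "B"}, 0.30),
    ({"A"},      0.20),
    ({"B"},      0.10),
    (set(),      0.40),
]

def pr(event, hists):
    """Total a priori probability of histories in which `event` occurs."""
    return sum(p for events, p in hists if event in events)

def pr_given(b, a, hists):
    """Pr(B|A): restrict to histories containing A, then renormalize."""
    joint = sum(p for events, p in hists if a in events and b in events)
    return joint / pr(a, hists)

print(pr_given("B", "A", histories))  # 0.30 / 0.50 = 0.6
```

The point of the sketch is only that nothing quantum remains at this stage: once the global probabilities are in hand, the transition probability is classical conditioning.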
Those global probabilities don’t come from nowhere. The basic mathematical entity in consistent histories is an object called the decoherence functional (which can be derived from a familiar-to-physicists postulate like an action or a Hamiltonian), which takes as its input two of these coarse-grained histories. The decoherence functional defines a consistency condition for the coarse-grained histories; a set of them is “consistent” if they are all pairwise decoherent according to the decoherence functional. You then get the a priori global probability for an individual history by using it for both inputs (in effect, calculating its self-decoherence, though I don’t see what that could mean). The whole thing is reminiscent of a diagonalized density matrix, and if I understood it better I’m sure I could make more of that similarity.
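For a minimal worked instance, here is a sketch of the standard textbook form of the decoherence functional, D(α,β) = Tr[C_α ρ C_β†], where C_α is a chain of projectors interleaved with unitary evolution. The specific setup (one qubit, two projection times, a Hadamard in between, all variable names) is my own choice for illustration:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
P = [np.outer(ket0, ket0), np.outer(ket1, ket1)]  # projectors onto |0>, |1>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # unitary between the two times
rho = np.outer(ket0, ket0)                        # initial state |0><0|

def chain(alpha):
    """Class operator for a two-step history: C_alpha = P_{a2} H P_{a1}."""
    a1, a2 = alpha
    return P[a2] @ H @ P[a1]

def D(alpha, beta):
    """Decoherence functional D(alpha, beta) = Tr[C_alpha rho C_beta†]."""
    return np.trace(chain(alpha) @ rho @ chain(beta).conj().T)

histories = [(i, j) for i in (0, 1) for j in (0, 1)]
# In this example the off-diagonal entries vanish, so the set of histories
# is consistent, and the diagonal entries D(a, a) are the global
# probabilities -- here 1/2 each for the two histories starting in |0>,
# matching the Born rule for a Hadamard acting on |0>.
for a in histories:
    print(a, D(a, a).real)
```

The diagonal entries are the global probabilities mentioned above, and the vanishing of the off-diagonal entries is the pairwise-decoherence condition; this is also where the resemblance to a diagonalized density matrix is easiest to see.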
Anyway, technical details aside, the important point is that there is a form of many-worlds thinking in which we do get the Born probabilities back, by conditioning on a universal prior computed from the decoherence functional. If we try this out as a picture of reality, we now have to make sense of the probabilities associated with the histories. Two possibilities suggest themselves to me (I will neglect subjectivist interpretations of those probabilities): (a) there’s only one world, and a world-probability is the primordial probability that that world was the one to become actual; (b) all the worlds exist, in multiple copies, and the probabilities describe their relative multiplicities. They’re both a little odd, but I think either is preferable to the whole “dare to be vague” line of argument.