Stephen: consistent histories works by having a set of disjoint, coarse-grained histories—“coarse-grained” meaning that they are underspecified by classical standards—which then obtain a priori probabilities through the use of a “decoherence functional” (which is where the structure that actually defines the theory, like the Hamiltonian, enters). You then recover the transition probabilities of ordinary quantum mechanics as conditional probabilities computed from those global probabilities of whole histories.
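For concreteness, in the usual Gell-Mann–Hartle presentation (which I take to be what's meant here), a history $\alpha$ is a time-ordered string of Heisenberg-picture projectors acting on an initial state $\rho$:

$$C_\alpha = P^{(n)}_{\alpha_n}(t_n)\cdots P^{(1)}_{\alpha_1}(t_1), \qquad D(\alpha,\alpha') = \mathrm{Tr}\big[C_\alpha\,\rho\,C_{\alpha'}^{\dagger}\big].$$

The set of histories is “consistent” when the off-diagonal terms (approximately) vanish, $D(\alpha,\alpha') \approx 0$ for $\alpha \neq \alpha'$, and then $p(\alpha) = D(\alpha,\alpha)$ behaves like a probability. The Hamiltonian enters through the time dependence of the projectors, and the ordinary transition probabilities come out as conditionals:

$$p(\alpha_n \mid \alpha_1,\ldots,\alpha_{n-1}) = \frac{p(\alpha_1,\ldots,\alpha_n)}{p(\alpha_1,\ldots,\alpha_{n-1})}.$$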
Some people have a neo-Copenhagenist attitude towards consistent histories—i.e., it’s just a formalism—but if you take it seriously as a depiction of an actually existing ensemble of worlds, it’s quite different from the more Parmenidean vision offered here, in which reality is a standing wave in configuration space, and “worlds” (and, therefore, observers) are just fuzzily defined substructures of that standing wave. The worlds in a realist consistent-histories interpretation would be sharply defined and noninteracting.
There is certainly a relation between the two possible versions of Many Worlds, in that you can construct a decoherence functional out of a wavefunction of the universe, and derive the probabilities of the coarse-grained histories from it. In effect, each history corresponds to a chunk of configuration space, and the total probability of that history comes from the amplitudes occupying that chunk. (The histories do not need to cover all of configuration space; they only need to be disjoint.) … I really need some terminology here. I’m going to call one type Parmenidean, and the other type Lewisian, after David Lewis, the philosopher who argued for a plurality of causally disjoint worlds. So: you can get a Lewisian theory of many worlds from a Parmenidean theory by breaking off chunks of the Parmenidean “block multiverse” and saying that those are the worlds. I can imagine a debate between a Parmenidean and a Lewisian, in which the Parmenidean would claim that their approach is superior because they regard all the possible Lewisian decompositions as equally partially real, whereas the Lewisian might argue that their approach is superior because there’s no futzing around about what a “world” is—the worlds are clearly (albeit arbitrarily) defined.
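As a cartoon of that “chunk of configuration space” picture, here is a toy discretization. Everything in it is made up for illustration—no actual dynamics, and the “worlds” are carved out arbitrarily, as a Lewisian would cheerfully admit:

```python
import numpy as np

# Toy "configuration space": 1000 discrete points carrying a
# made-up normalized wavefunction. Purely illustrative.
rng = np.random.default_rng(0)
psi = rng.normal(size=1000) + 1j * rng.normal(size=1000)
psi /= np.linalg.norm(psi)

# Disjoint chunks standing in for Lewisian worlds. They need not
# cover everything: points 600..999 belong to no world at all.
worlds = {"A": slice(0, 250), "B": slice(250, 400), "C": slice(400, 600)}

# Total probability of a world = the Born weight of its chunk.
weights = {name: float(np.sum(np.abs(psi[chunk]) ** 2))
           for name, chunk in worlds.items()}

# Conditional probabilities, given that you are in some world at all.
total = sum(weights.values())
print({name: w / total for name, w in weights.items()})
```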
But the really significant thing is that you can get the numerical quantum predictions from the “Lewisian” approach, whereas you can’t get them from the Parmenidean one. Robin Hanson’s mangled-worlds formula gets results by starting down the road towards a Lewisian specification of exactly what the worlds are, but he gets the right count in a certain limit without having to specify exactly when one world becomes two (or many). Anyway, the point is not that consistent histories makes different predictions, but that it makes predictions at all.
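To make the counting issue concrete, here is a toy calculation loosely in the spirit of mangled worlds, though emphatically not Hanson’s actual derivation: his argument is about how mangling dynamics select an amplitude cutoff, whereas below I simply impose a cutoff by hand, at the amplitude of the Born-typical world, and check that counting the surviving worlds equally then lands near the Born frequency. All the parameters (N, p, the cutoff placement) are assumptions of the toy.

```python
from math import comb, log

# N identical binary "measurements" with Born weights p and 1 - p.
# A world that saw k "up" outcomes has amplitude
# p**(k/2) * (1-p)**((N-k)/2), and there are comb(N, k) such worlds.
N, p = 1000, 0.7

def log_amp(k):
    # Log-amplitude of a world with k "up" outcomes out of N.
    return 0.5 * (k * log(p) + (N - k) * log(1 - p))

# Hand-placed cutoff: the amplitude of the Born-typical world (k = pN).
# In Hanson's story this placement is supposed to emerge from mangling
# dynamics; here it is simply assumed.
cutoff = log_amp(round(p * N))

# Count every surviving world equally and compute the average
# frequency of "up" outcomes across them.
count = weighted_k = 0
for k in range(N + 1):
    if log_amp(k) >= cutoff:
        count += comb(N, k)
        weighted_k += k * comb(N, k)

print(weighted_k / (N * count))  # close to p = 0.7
```

The punchline matches the paragraph above: an equal count over sharply defined worlds, plus a cutoff, can reproduce the Born frequencies, which is the sense in which the Lewisian bookkeeping earns its keep.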