Well, what precisely is meant to be excluded when intersections of non-extendible futures are the only ones that matter? (ugh… I just had the thought that this conversation might summon the spambots :)) Anyway, why only the non-extendible paths?
As for the rest, I want to read through and understand the paper before I comment on it. I came to a halt early on because I was confused about extendibility.
However, my overall question is whether the idea on its own naturally produces a single history, or whether it still needs some sort of “collapse” or other contrived mechanism to do so?
The big picture is that we are constructing something like special relativity’s notion of time, for any partially ordered set. For any p and q in the set, either p comes before q, p comes after q, or p and q have no order relation. That last possibility is the analogue of spacelike separation. If you then have p, q, r, s..., all of which are pairwise spacelike, you have something resembling a spacelike slice. But you want it to be maximal for the analogy to be complete. The bit about intersecting all inextendible paths is one form of maximality—it’s saying that every maximal timelike path intersects your spacelike slice.
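To make that concrete, here is a minimal Python sketch, assuming a toy causal set given as a directed acyclic graph; the four-element “diamond” poset and all the function names are my own illustration, not from the paper. It checks exactly the two properties above: that a candidate slice is pairwise spacelike, and that it is maximal in the sense of meeting every inextendible path.

```python
from itertools import combinations

# Toy causal set as a DAG: a "diamond" poset with a < b, a < c, b < d, c < d.
edges = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

def descendants(x):
    """Everything strictly to the future of x in the partial order."""
    seen, stack = set(), list(edges[x])
    while stack:
        y = stack.pop()
        if y not in seen:
            seen.add(y)
            stack.extend(edges[y])
    return seen

def spacelike(p, q):
    """p and q are 'spacelike separated': neither precedes the other."""
    return p != q and q not in descendants(p) and p not in descendants(q)

def is_antichain(s):
    """A pairwise spacelike set of elements: a candidate spacelike slice."""
    return all(spacelike(p, q) for p, q in combinations(s, 2))

def inextendible_paths():
    """All maximal chains, i.e. paths that cannot be extended at either end."""
    minimal = [x for x in edges if all(x not in t for t in edges.values())]
    paths = []
    def grow(path):
        successors = edges[path[-1]]
        if not successors:
            paths.append(path)
        for y in successors:
            grow(path + [y])
    for m in minimal:
        grow([m])
    return paths

def is_maximal_slice(s):
    """Maximality in the sense above: every inextendible path intersects s."""
    return is_antichain(s) and all(set(p) & s for p in inextendible_paths())

print(is_maximal_slice({"b", "c"}))  # True: both paths a-b-d and a-c-d hit it
print(is_maximal_slice({"b"}))       # False: the path a-c-d misses it
```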
Then, having learnt to think of a poset as a little spacetime, you associate Hilbert spaces with the elements of the poset, and something like a local Schrödinger evolution with each succession step. But the steps can be one-to-many or many-to-one, which is why they use the broader notion of a “completely positive mapping” rather than a unitary mapping. You can also define the total Hilbert space on a spacelike slice by taking the tensor product of the little Hilbert spaces, and the evolution from one spacelike slice to a later one by algebraically composing all the individual mappings. All in all, it’s like quantum field theory constructed on a directed graph rather than on a continuous space.
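Here is a hedged sketch of such steps in a toy qubit setting; the specific Kraus operators below are my own illustrative choices, not the paper’s. A completely positive map in Kraus form sends rho to the sum of K rho K† over the Kraus operators K; a one-to-many step can be an isometric embedding into a larger tensor product, and a many-to-one step can be a partial trace, neither of which is unitary.

```python
import numpy as np

def apply_cp_map(rho, kraus_ops):
    """Completely positive map in Kraus form: rho -> sum_k K rho K^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

# State at a single node: the qubit |+>.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_a = np.outer(plus, plus)

# One-to-many step a -> {b, c}: a single Kraus operator V embedding one qubit
# into two, |0> -> |00> and |1> -> |11>.  The output lives on the tensor
# product of the two successor Hilbert spaces.
V = np.zeros((4, 2))
V[0, 0] = 1.0   # |00><0|
V[3, 1] = 1.0   # |11><1|
rho_bc = apply_cp_map(rho_a, [V])   # the Bell state (|00> + |11>)/sqrt(2)

# Many-to-one step {b, c} -> e: trace out node c, written as a CP map with
# two Kraus operators, I tensor <0| and I tensor <1|.
bra0 = np.array([[1.0, 0.0]])
bra1 = np.array([[0.0, 1.0]])
rho_e = apply_cp_map(rho_bc, [np.kron(np.eye(2), bra0),
                              np.kron(np.eye(2), bra1)])
print(np.round(rho_e, 3))   # I/2: entanglement makes the local state mixed

# Evolution between slices is the algebraic composition of such maps; the two
# lines above already compose "embed" followed by "coarse-grain".
```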
my overall question is whether the idea on its own naturally produces a single history, or whether it still needs some sort of “collapse” or other contrived mechanism to do so?
I find it hard to say how naturally it does so. The paper is motivated by the problem that the Wheeler-DeWitt equation in quantum cosmology only applies to “globally hyperbolic” spacetimes. It’s an exercise in developing a more general formalism. So it’s not written in order to promote a particular quantum interpretation. It’s written in the standard way: “observables” are what’s real, quantum states are just guides to what the observables will do.
A given history will attach a quantum state to every node in the causal graph. Under the orthodox interpretation, the reality at each node does not consist of the associated quantum state vector, but rather of local observables taking specific values. Just to be concrete, since this must sound very abstract, let’s talk in terms of qubits. Suppose we have a QCH with a qubit state at every node. Orthodoxy says that these qubit “states” are not the actual states; the actuality everywhere is just 0 or 1. A many-worlds interpretation would have to say those maximal spacelike tensor products are the real states. But when we evolve such a state to the next spacelike slice, it should usually become an unfactorizable superposition. This contradicts the QCH philosophy of specifying a definite qubit state at each node. So it’s as if there’s a collapse assumption built in, only I don’t think it’s a necessary assumption. You should be able to talk about a reduced density matrix at each node instead, and still use the formalism.
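To illustrate that last point with a small self-contained check (the gates here are my own choice, not anything from the paper): a definite product state on one slice evolves into an unfactorizable superposition on the next, after which there is no definite per-node state vector, but each node still gets a well-defined reduced density matrix.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])

# Slice 1: two nodes in the definite product state |0>|0>.
psi = np.kron(ket0, ket0)

# A local Schrodinger step: Hadamard on node 1, then CNOT, giving the
# entangled Bell state (|00> + |11>)/sqrt(2) on slice 2.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
CNOT = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 1.0],
                 [0.0, 0.0, 1.0, 0.0]])
psi_next = CNOT @ np.kron(H, np.eye(2)) @ psi

# Slice 2 does not factorize into per-node state vectors, but each node
# still has a reduced density matrix obtained by tracing out the rest.
rho = np.outer(psi_next, psi_next).reshape(2, 2, 2, 2)
rho_node1 = rho.trace(axis1=1, axis2=3)   # trace out node 2
rho_node2 = rho.trace(axis1=0, axis2=2)   # trace out node 1
print(np.round(rho_node1, 3))   # I/2: mixed, not a definite 0 or 1
```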
For me the ontological significance of QCH is not that it inherently prefers a single-world interpretation, but just that it shows an alternative midway between many worlds and classical spacetime—a causal grid of quasi-local state vectors. But the QCH formalism is still a long way from actually giving us quantum gravity, which was the objective. So it has to be considered unproven work in progress.
This earlier paper might also help.