The universe realio trulio starts to look like a 4-dimensional object, and reference frames are just slices of this 4-dimensional object.
But the quantum wavefunction isn’t a four-dimensional object. If we’re doing field theory, it’s an object in an infinite-dimensional space. The four-dimensionality of field theory resides in the operators, not the wavefunctions. So if I say that the observables corresponding to operators are what’s real, I can think relativistically about space and time, because everything that’s real is always anchored to a specific point in space-time, and the notion of a point doesn’t involve absolute simultaneity. But if I say that the wavefunctions are real, then I have to say that the spacelike hypersurfaces with which they are associated are also real.
If you want your probabilities to be frequencies
What else can they be, in a Many Worlds theory? The whole meaning of Many Worlds is that this is one world among many. There are other worlds and things happen differently there. So if we do the math and add up the frequencies for physical events across all the worlds, we had better find out that ours is a typical sort of world.
Unfortunately, a lot of people who talk about Many Worlds never even think things through this far. They just think “unitary evolution produces decoherence, decoherence diagonalizes the density matrix in a particular basis, observable reality is one of those basis states, therefore the wavefunction of the universe contains observable reality and I don’t need to say any more”. In particular, Many Worlds advocates tend to surreptitiously rely on the Born rule in order to explain the observed frequencies of events. Without something like a Born rule, a reduced density matrix is just a mathematical object. If you inspect it, you will not see multiple copies of anything. Instead, you will see an array of numbers. It’s just like the parable of the car dealer. If I am to deliver on your order for nine BMWs, I have to hand over nine cars, not one car with a number painted on it. Many Worlds fails to deliver on its promise for exactly the same reason.
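The density-matrix point can be made concrete with a toy numpy sketch (a Bell pair; the variable names are my own, not anything from the thread): the object left after tracing out the other subsystem really is just an array of numbers, and reading frequencies off its diagonal is already an application of the Born rule, not a derivation of it.

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) for a pair of qubits
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)

# Reduced density matrix of qubit A: trace out qubit B.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # indices (a, b, a', b')
rho_A = np.trace(rho, axis1=1, axis2=3)              # partial trace over B

print(rho_A.real)           # [[0.5 0. ] [0.  0.5]] -- just an array of numbers
print(np.diag(rho_A).real)  # [0.5 0.5] -- "frequencies" only if you assume Born
```

Nothing in the array itself says it contains two copies of anything; that reading has to be put in by hand.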
But the quantum wavefunction isn’t a four-dimensional object.
The four-dimensionality of field theory resides in the operators, not the wavefunctions.
But if I say that the wavefunctions are real, then I have to say that the spacelike hypersurfaces with which they are associated are also real.
I don’t see why any of this is true. You’ll have to unpack more and make it easier to understand, maybe.
If you want your probabilities to be frequencies
What else can they be, in a Many Worlds theory? The whole meaning of Many Worlds is that this is one world among many.
Whoa whoa whoa. No. You should not be putting this much effort if you don’t agree that the “worlds” are a convenient but misleading way to describe it.
I don’t see why any of this is true. You’ll have to unpack more
OK:
the quantum wavefunction isn’t a four-dimensional object.
What is a four-dimensional object? It’s an object which lives in four dimensions. What does that mean? It means its parts can be located in four-dimensional space. If that’s Minkowski space, then we can look at the object from the perspective of various relativity-compliant reference frames.
Now what is a wavefunction? It can only be regarded as four-dimensional in this sense if it’s the wavefunction of a single particle. Once you move to wavefunctions for multiple particles, or wavefunctionals for quantum fields, the wavefunction no longer has localizable parts. Its constituent amplitudes are at best “multilocal”: e.g. you have amplitudes for a set of n mutually spacelike points.
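A minimal numpy sketch of the contrast, on a discretized line (the sites and amplitudes are chosen purely for illustration):

```python
import numpy as np

# One particle on an N-site line: N amplitudes, one per position.
# Its parts are localizable: amplitude psi1[x] sits at site x.
N = 8
x = np.arange(N)
psi1 = np.exp(-0.5 * (x - 3.0) ** 2).astype(complex)
psi1 /= np.linalg.norm(psi1)

# Two particles: the wavefunction lives on configuration space -- an
# N x N grid of amplitudes psi2[x1, x2], not two objects sitting in space.
psi2 = np.zeros((N, N), dtype=complex)
psi2[1, 2] = psi2[2, 1] = 1 / np.sqrt(2)  # "one at site 1, one at site 2, either way"

# Only a product (unentangled) state factors into one-particle pieces;
# this matrix has rank 2, so no such spatial decomposition exists.
rank = np.linalg.matrix_rank(psi2)
print(rank)  # 2
```

Each amplitude is attached to a *pair* of positions, which is what “multilocal” means here.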
The four-dimensionality of field theory resides in the operators, not the wavefunctions.
The field operators are indexed by space-time coordinates—they have the form psi(x), where psi is the field, x is a spatial or space-time position, and psi(x) is an operator which can be applied to a wavefunctional of amplitudes for field configurations. So the operators for a field are four-dimensional (in four-dimensional quantum field theory) because there is a four-dimensional manifold of them: the value of the field at a space-time point is the corresponding observable, and the field potentially has a value anywhere in four-dimensional space-time.
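For a scalar field this can be written out explicitly in the functional Schrödinger picture (a standard sketch; here Phi is a field configuration and Psi the wavefunctional): the field operator at a point acts by multiplication, and its conjugate momentum by functional differentiation.

```latex
\bigl(\hat{\Phi}(x)\,\Psi\bigr)[\Phi] = \Phi(x)\,\Psi[\Phi],
\qquad
\bigl(\hat{\pi}(x)\,\Psi\bigr)[\Phi] = -i\,\frac{\delta \Psi[\Phi]}{\delta \Phi(x)}
```

The operators carry the space-time index x; the wavefunctional Psi does not.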
But if I say that the wavefunctions are real, then I have to say that the spacelike hypersurfaces with which they are associated are also real.
Wavefunctions, and this is especially clear for multiparticle configurations and for fields, are superpositions of configurations defined on some spacelike hypersurface. The hypersurface is part of the definition of the wavefunction, one of the conceptually essential ingredients. So if the wavefunction is real, so is the hypersurface on which it is defined.
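The standard formal expression of this dependence is the Tomonaga–Schwinger equation, which evolves the wavefunctional by deforming the hypersurface Sigma point by point rather than by advancing a single global time:

```latex
i\,\frac{\delta \Psi[\Sigma]}{\delta \Sigma(x)} = \mathcal{H}(x)\,\Psi[\Sigma]
```

Here H(x) is the Hamiltonian density at the point x of Sigma; the hypersurface appears explicitly as an argument of the state.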
You should not be putting this much effort if you don’t agree that the “worlds” are a convenient but misleading way to describe it.
I refer to my dilemma for Many Worlds advocates, as quoted by orthonormal. If you cannot say what the worlds are, then you don’t have a theory. You may think you have a theory, but you don’t, because the worlds (branches, configurations, something) are supposed to be the point of contact between the actually-real wavefunction and observable reality.
One reason I am so strident on this topic is that belief in Many Worlds often seems to be based on half-examined notions that don’t even make sense when you manage to force them into words. The typical non-physicist’s idea of Many Worlds is that it involves many worlds, parallel universes just like in science fiction. The typical physicist’s idea of Many Worlds is more like “wavefunction collapse does not occur”; it’s a “no-collapse” interpretation. But this is the interpretation which is nonsense unless you force it into the mold of the “naive” Many Worlds interpretation, at which point it becomes susceptible to critique and falsification.
The no-collapse interpretation survives in physicists’ minds because of two things: first, Copenhagen tells us that we can get empirically accurate results from wavefunctions; second, doublethink about the meaning of decoherence. No-collapse advocates want to say that decoherence explains how to see observed reality, deep inside the wavefunction, but they won’t take this approach seriously enough to say that the components, aspects, or substructures of the wavefunction that they are pointing to, are really real—because that would be like having a preferred basis. This attitude insulates them from having to address the problems with relativity and the Born rule that people who do talk about worlds face. That’s why I call it doublethink.
If reality is to be found inside the wavefunction by decomposing the local density matrix in the basis in which it is most nearly diagonal, then you’re saying that observable reality is one of those local basis states, and you are accountable for explaining why the squared modulus of its coefficient corresponds to the frequency with which the corresponding events are actually seen to happen.
Hm, stuff does seem to be more complicated than I’d thought.
Still, saying “and so, collapse happens” doesn’t sit well with me at all, for much-better-understood nonrelativistic QM reasons. Say we’re on opposite sides of a reasonably information-proof wall, and I measure a 2-state system. This is the same problem as Schrödinger’s cat measuring the decay of an atom—I go into a macroscopic superposition. If you had a 2-state system that was entangled with my 2-state system, you could do a Bell inequality measurement on a signal that I send—even if I send the message manually—and it would show that I really am in this quantum state. On the other hand, from my perspective, when I measure a system I get an answer. So from your perspective I’m in an entangled state, and from my perspective I’ve measured a definite value. How would collapse replicate this sort of subjectivity?
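The Bell-measurement step can be made concrete. Here is a small numpy sketch (the standard CHSH setup for a singlet pair, not anything specific to the wall scenario) of the correlations that would certify the entangled state:

```python
import numpy as np

# Spin measurement along angle t in the x-z plane, for one qubit.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(t):
    return np.cos(t) * sz + np.sin(t) * sx

# Singlet state of the two entangled 2-state systems.
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def corr(a, b):
    # Quantum correlator <A(a) B(b)> in the singlet state.
    op = np.kron(spin(a), spin(b))
    return np.real(singlet.conj() @ op @ singlet)

# CHSH combination at the standard optimal angles.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4
S = corr(a1, b1) - corr(a1, b2) + corr(a2, b1) + corr(a2, b2)
print(abs(S))  # 2.828... = 2*sqrt(2), above the classical bound of 2
```

No local definite-value story reproduces |S| > 2, which is what the statistics would reveal about the joint state.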
Yet another reason why the Copenhagen interpretation, in its true form, does not reify the wavefunction. “Collapse” is just like the update of a prior in the light of new knowledge; you throw away the parts of a probability distribution which are now knowably not relevant. According to Copenhagen, it is the observables that are real, and the wavefunctions are just tabulations of incomplete knowledge. The Copenhagen interpretation only leads you astray if you try to defend the idea that QM according to Copenhagen is a complete theory. But if you’re happy with the idea that QM is incomplete (and thus not the final word in physics), then Copenhagen is your guide. The problem of collapsing wavefunctions is entirely an artefact of belief in wavefunctions. The real problem is simply to explain what’s behind the success of QM, and wavefunction realism is just one possible approach.
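The analogy can be shown side by side in a few lines of numpy (a toy four-outcome example of my own construction): on this view, classical conditioning and “collapse” are literally the same arithmetic.

```python
import numpy as np

# Classical update: a prior over 4 outcomes; learn "outcome is even-indexed",
# throw away the now-irrelevant part of the distribution and renormalize.
prior = np.array([0.1, 0.2, 0.3, 0.4])
mask = np.array([1, 0, 1, 0], dtype=float)   # events compatible with the news
posterior = prior * mask / (prior * mask).sum()

# "Collapse": the same operation on a state vector -- project onto the
# subspace compatible with the observed result, then renormalize.
psi = np.sqrt(prior).astype(complex)          # amplitudes with these Born weights
psi_post = psi * mask
psi_post /= np.linalg.norm(psi_post)

print(posterior)             # [0.25 0.   0.75 0.  ]
print(np.abs(psi_post)**2)   # same numbers: collapse as conditioning
```

The projection discards nothing physical if the wavefunction was only ever a bookkeeping device for probabilities.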
Okay. So how would your favored interpretation handle that sort of subjectivity?

It is not my favorite, but an approach which should at least be easy to understand is the “zigzag in time” interpretation, which says that spacelike correlations are due to microscopic time loops. Physics is local, but there are inflection points where forward-in-time causality turns into backwards-in-time causality, and the actual causal web of the universe therefore involves nonlocal-looking regularities. On this view, quantum mechanics is the statistical mechanics of a physics with causal chains running forward and backward in time, and such a physics becomes possible with general relativity.
The first part of this idea—causes operating in both directions of time—is almost as old as quantum mechanics. It’s in the Wheeler-Feynman absorber theory, the transactional interpretation of John Cramer, Yakir Aharonov’s time-symmetric quantum mechanics, and the work of Huw Price, among others; but I prefer the relatively obscure work of Mark Hadley, because he gives it the clearest foundation: the “inflection” in which the time direction of a causal chain reverses, as arising from a non-time-orientable patch in the space-time 4-manifold.
If the microscopic topology of space-time admits such regions, then not only is its evolution in time non-deterministic, but it will be non-deterministic in a complexly correlated way: causal loops in the far future topology constrain what happens on a spacelike hypersurface in the present, in a way that looks highly nonlocal. One manifestation of this would be nonlocally correlated perturbations to the passage of a particle or a wave through space, perturbations correlated not just with each other but also with distant distributions of matter; thus, the effects seen in the double-slit experiment, and all the other standard quantum phenomena.
If this approach worked, it would be very elegant, because it would turn out that quantum mechanics is a microscopic side effect of general relativity. It would require the matter fields to exhibit microscopic violations of the energy conditions which normally prevent wormholes and time machines, but this is not impossible: there are many simple models in which the energy conditions are violated. The challenge would be to show (1) a combination of fields which exhibits those violations and reduces to the standard model, and (2) that the rules of quantum probability actually do follow from the existence of microscopic time loops. Hadley has an argument that the nondistributive logic of quantum propositions also characterizes the nonlocal constraints arising from time loops, and that this in turn implies the rest of the quantum formalism (e.g. the use of Hilbert space and noncommutative operators for observables); but I believe he needs to actually exhibit some simple solutions to general relativity containing time loops, and show how to obtain the Schrödinger equation from the application of probability theory to such a class of simple solutions, before his argument can be taken seriously.
If this approach worked, it would be very elegant, because it would turn out that quantum mechanics is a microscopic side effect of general relativity. It would require the matter fields to exhibit microscopic violations of the energy conditions which normally prevent wormholes and time machines, but this is not impossible: there are many simple models in which the energy conditions are violated.
Energy conditions (well, topological censorship, really) in classical GR prevent only traversable wormholes, and only in 3+1 dimensions. Non-simply-connected spacetimes are otherwise allowed in a covariant formulation of GR, though they do not arise in an initial value problem with a simply connected spacelike initial surface.
Additionally, changing one’s past is absolutely incompatible with GR, as there is a unique metric tensor associated with each spacetime point, not two or more different ones, one for each pass through a closed timelike curve. The only way time travel can happen in GR is by unwrapping these time loops into some universal cover. And there is a heavy price to pay for that, but that discussion is straying too far afield, so feel free to PM me if you want to talk further.