This article (PDF) gives a nice (and fairly accessible) summary of some of the issues involved in extending MWI to QFT. See sections 4 and 8 in particular. Their focus in the paper is wavefunction realism, but given that MWI (at least the version advocated in the Sequences) is committed to wavefunction realism, their arguments apply. They offer a suggestion of the kind of theory that they think can replace MWI in the relativistic context, but the view is insufficiently developed (at least in that paper) for me to fully evaluate it.
A quick summary of the issues raised in the paper:
In NRQM, the wave function lives in configuration space, but there is no well-defined particle configuration space in QFT since particle number is not conserved and particles are emergent entities without precisely defined physical properties.
A move to field configuration space is unsatisfactory because quantum field theories admit of equivalent description using many different choices of field observable. Unlike NRQM, where there are solid dynamical reasons for choosing the position basis as fundamental, there seems to be no natural or dynamically preferred choice in QFT, so a choice of a particular field configuration space description would amount to ad hoc privileging.
MWI in NRQM treats physical space as non-fundamental. This is hard to justify in QFT, because physical space-time is bound up with the fundamentals of the theory to a much greater degree. The dynamical variables in QFT are operators that are explicitly associated with space-time regions.
This objection is particularly clever and interesting, I think. In MWI, the history of the universe is fully specified by giving the universal wavefunction at each time in some reference frame. In a relativistic context, one would expect that all one needs to do in order to describe how the universe looks in some other inertial reference frame is to perform a Lorentz transformation on this history. If the history really tells us everything about the physical state of the universe, then it gives us all the information required to determine how the universe looks under a Lorentz transformation. But in relativistic quantum mechanics, this is not true. Fully specifying the wavefunction (defined on an arbitrarily chosen field configuration space, say) at all times is not sufficient to determine what the universe will look like under a Lorentz transformation. See the example on p. 21 in the paper, or read David Albert’s paper on narratability. This suggests that giving the wavefunction at all times is not a full specification of the physical properties of the universe.
On the other hand, my understanding is that QFT itself doesn’t exist in a rigorous form yet, either.
I assume you’re referring to the infinities that arise in QFT when we integrate over arbitrarily short length scales. I don’t think this shows a lack of rigor in QFT. Thanks to the development of renormalization group theory in the 70s, we know how to do functional integrals in QFT with an imposed cutoff at some finite short length scale. QFT with a cutoff doesn’t suffer from problems involving infinities. Of course, the necessity of the cutoff is an indication that QFT is not a completely accurate description of the universe. But we already know that we’re going to need a theory of quantum gravity at the Planck scale. In the domain where it works, QFT is reasonably rigorously defined, I’d say.
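The logic of the cutoff can be illustrated with ordinary calculus (a toy model, not QFT itself): an integral that diverges at short distances becomes perfectly finite once a cutoff ε > 0 is imposed, at the cost of the answer depending, logarithmically in this example, on where the cutoff sits.

```python
import math

def regulated_integral(epsilon, n=10_000):
    """Midpoint-rule integral of 1/x over [epsilon, 1] on a geometric grid."""
    r = (1.0 / epsilon) ** (1.0 / n)       # common ratio between grid points
    total, x = 0.0, epsilon
    for _ in range(n):
        x_next = x * r
        midpoint = 0.5 * (x + x_next)
        total += (x_next - x) / midpoint   # cell width times the integrand 1/x at the midpoint
        x = x_next
    return total

# Finite for every cutoff, but growing like log(1/epsilon) as the cutoff is removed:
for eps in (1e-2, 1e-4, 1e-8):
    print(f"epsilon = {eps:g}: integral = {regulated_integral(eps):.4f}, "
          f"log(1/epsilon) = {math.log(1 / eps):.4f}")
```

The divergence shows up only in the limit ε → 0; at any fixed cutoff the calculation is completely well-defined, which is the sense in which cutoff QFT avoids the infinities.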
This article (PDF) gives a nice (and fairly accessible) summary of some of the issues involved in extending MWI to QFT.
Thanks for that; it’s quite an interesting article, and I’m still trying to absorb it. However, one thing that seems pretty clear to me is that for EY’s intended philosophical purposes, there really is no important distinction between “wavefunction realism” (in the context of NRQM) and “spacetime state realism” (in the context of QFT). Especially since I consider this post to be mostly wrong: locality in configuration space is what matters, and configuration space is a vector space (specifically a Hilbert space) -- there is no preferred (orthonormal) basis.
I assume you’re referring to the infinities that arise in QFT when we integrate over arbitrarily short length scales. I don’t think this shows a lack of rigor in QFT
If the “problem” is merely that certain integrals are divergent, then I agree. No one says that the fact that $\int_0^1 \frac{1}{x}\, dx$ diverges shows a lack of rigor in real analysis!
What concerns me is whether any actual mathematical lies are being told—such as integrals being assumed to converge when they haven’t yet been proved to do so. Or something like the early history of the Dirac delta, when physicists unashamedly spoke of a “function” with properties that a function cannot, in fact, have.
If QFT is merely a physical lie—i.e., “not a completely accurate description of the universe”—and not a mathematical one, then that’s a different matter, and I wouldn’t call it an issue of “rigor”.
However, one thing that seems pretty clear to me is that for EY’s intended philosophical purposes, there really is no important distinction between “wavefunction realism” (in the context of NRQM) and “spacetime state realism” (in the context of QFT).
I’m a little unclear about what EY’s intended philosophical purposes are in this context, so this might well be true. One possible problem worth pointing out is that spacetime state realism involves abandoning a particular form of reductionism. Whether EY is committed to that form of reductionism is something somebody more familiar with the Sequences than I am would have to judge.
According to spacetime state realism, the physical state of a spacetime region is not supervenient on the physical states of its subregions, i.e. the physical state of a spacetime region could be different without any of its subregions being in different states. This is because subregions can be entangled with one another in different ways without altering their local states. This is not true of wavefunction realism set in configuration space. There, the only way a region of configuration space could have different physical properties is if some of its subregions had different properties.
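A minimal two-qubit sketch of this point (a toy stand-in for spacetime regions, not actual field theory): the Bell states |Φ+⟩ and |Φ−⟩ are distinct global states, yet the reduced density matrix of each subsystem is the same maximally mixed state in both, so the local states underdetermine the global state.

```python
import numpy as np

# Two different global states of a two-qubit "universe":
# |Phi+> = (|00> + |11>)/sqrt(2)  and  |Phi-> = (|00> - |11>)/sqrt(2).
phi_plus  = np.array([1, 0, 0,  1]) / np.sqrt(2)
phi_minus = np.array([1, 0, 0, -1]) / np.sqrt(2)

def reduced_state_A(psi):
    """Partial trace over qubit B of the pure state |psi><psi|."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # indices: A, B, A', B'
    return np.einsum('ajbj->ab', rho)                    # sum over B's index

rho_plus, rho_minus = reduced_state_A(phi_plus), reduced_state_A(phi_minus)
print(np.allclose(rho_plus, np.eye(2) / 2))   # True: A's state is maximally mixed
print(np.allclose(rho_plus, rho_minus))       # True: identical local states
print(np.allclose(phi_plus, phi_minus))       # False: different global states
```

The two global states differ only in how the regions are entangled with each other, which is exactly the information the subregion states fail to capture.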
Also, I think it’s possible that the fact that the different “worlds” in spacetime state realism are spatially overlapping (as opposed to wavefunction realism, where they are separated in configuration space) might lead to interesting conceptual differences between the two interpretations. I haven’t thought about this enough to give specific reasons for this suspicion, though.
Especially since I consider this post to be mostly wrong: locality in configuration space is what matters, and configuration space is a vector space (specifically a Hilbert space) -- there is no preferred (orthonormal) basis.
I’m not sure exactly what you’re saying here, but if you’re rejecting the claim that MWI privileges a particular basis, I think you’re wrong. Of course, you could treat configuration space itself as if it had no preferred basis, but this would still amount to privileging position over momentum. You can’t go from position space to momentum space by a change of coordinates in configuration space. Configuration space is always a space of possible particle position configurations, no matter how you transform the coordinates.
I think you might be conflating configuration space with the Hilbert space of wavefunctions on configuration space. In this latter space, you can transform from a basis of position eigenstates to a basis of momentum eigenstates with a coordinate transformation. But this is not configuration space itself, it is the space of square integrable functions on configuration space. [I’m lying a little for simplicity: Position and momentum eigenstates aren’t actually square integrable functions on configuration space, but there are various mathematical tricks to get around this complication.]
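To make the distinction concrete, here is a discretized sketch (an N-point grid standing in for configuration space, so the Hilbert space of wavefunctions is just C^N; the discretization conveniently sidesteps the non-normalizable-eigenstate complication mentioned above). The position-to-momentum transformation is a unitary change of basis on the Hilbert space, not a relabeling of the grid points themselves:

```python
import numpy as np

N = 64
x = np.arange(N)

# A normalized wavepacket, expressed in the position basis:
psi = np.exp(-0.5 * ((x - N / 2) / 4.0) ** 2) * np.exp(2j * np.pi * 5 * x / N)
psi /= np.linalg.norm(psi)

# The unitary DFT matrix: the change of basis from position to momentum eigenstates.
F = np.exp(-2j * np.pi * np.outer(x, x) / N) / np.sqrt(N)
psi_momentum = F @ psi

print(np.allclose(F.conj().T @ F, np.eye(N)))       # True: the basis change is unitary
print(np.isclose(np.linalg.norm(psi_momentum), 1))  # True: same state, new coordinates
```

The transformation acts on the N-dimensional space of wavefunctions; the N grid points (the stand-in for configuration space) are position labels throughout.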
What concerns me is whether any actual mathematical lies are being told—such as integrals being assumed to converge when they haven’t yet been proved to do so. Or something like the early history of the Dirac delta, when physicists unashamedly spoke of a “function” with properties that a function cannot, in fact, have.
If this is your standard for lack of rigor, then perhaps QFT hasn’t been rigorously formulated yet, but the same would hold of pretty much any physical theory. I think you can find places in pretty much every theory where some such “mathematical lie” is relied upon. There’s an example of a standard mathematical lie told in NRQM earlier in my post.
In many of these cases, mathematicians have formulated more rigorous versions of the relevant proofs, but I think most physicists tend to be blithely ignorant of these mathematical results. Maybe QFT isn’t rigorously formulated according to the mathematician’s standards of rigor, but it meets the physicist’s lower standards of rigor. There’s a reason most physicists working on QFT are uninterested in things like Algebraic Quantum Field Theory.
I’m a little unclear about what EY’s intended philosophical purposes are in this context
As I read him, he mainly wants to make the point that “simplicity” is not the same as “intuitiveness”, and the former trumps the latter. It may seem more “humanly natural” for there to be some magical process causing wavefunction collapse than for there to be a proliferation of “worlds”, but because the latter doesn’t require any additions to the equations, it is strictly simpler and thus favored by Occam’s Razor.
I think you might be conflating configuration space with the Hilbert space of wavefunctions on configuration space.
Yes, sorry. What I actually meant by “configuration space” was “the Hilbert space that wavefunctions are elements of”. That space, whatever you call it (“state space”?), is the one that matters in the context of “wavefunction realism”.
(This explains an otherwise puzzling passage in the article you linked, which contrasts the “configuration space” and “Hilbert space” formalisms; but on the other hand, it reduces my credence that EY knows what he’s talking about in the QM sequence, since he doesn’t seem to talk about the space-that-wavefunctions-are-elements-of much at all.)
If this is your standard for lack of rigor, then perhaps QFT hasn’t been rigorously formulated yet, but the same would hold of pretty much any physical theory
This is contrary to my understanding. I was under the impression that classical mechanics, general relativity, and NRQM had all by now been given rigorous mathematical formulations (in terms of symplectic geometry, Lorentzian geometry, and the theory of operators on Hilbert space respectively).
Maybe QFT isn’t rigorously formulated according to the mathematician’s standards of rigor, but it meets the physicist’s lower standards of rigor. There’s a reason most physicists working on QFT are uninterested in things like Algebraic Quantum Field Theory.
The mathematician’s standards are what interests me, and are what I mean by “rigor”. I don’t consider it a virtue on the part of physicists that they are unaware of or uninterested in the mathematical foundations of physics, even if they are able to get away with being so uninterested. There is a reason mathematicians have the standards of rigor they do. (And it should of course be said that some physicists are interested in rigorous mathematics.)
This is a very good post, but I wonder: one of the authors of the paper you cite is David Wallace, perhaps the most prominent proponent of the modern Everettian interpretation.
He has just published a new book called “The Emergent Multiverse”, in which he claims there is no problem unifying MWI with QFT, because interactions within worlds are local and only states are nonlocal.
I have yet to hear him mention any need for a serious reformulation of MWI.
You said you suspect this is necessary, but that you hope we can recover a similar MWI. Isn’t it more reasonable to expect that at the Planck scale something else will explain the quantum weirdness?
After all, if MWI fails on both probability and relativity, then there is no good reason to suspect that this interpretation is correct.
Have you given any thought to Gerard ’t Hooft’s idea of cellular automata, which he claims salvages determinism, locality, and realism?
You said you suspect this is necessary, but that you hope we can recover a similar MWI. Isn’t it more reasonable to expect that at the Planck scale something else will explain the quantum weirdness?
When I talk about recovering MWI, I really just mean absorbing the lesson that our theory does not need to deliver determinate measurement results, and ad hoc tools for satisfying this constraint (such as collapse or hidden variables) are otiose. Of course, the foundations of our eventual theory of quantum gravity might be different enough from those of quantum theory that the interpretational options don’t translate. How different the foundations will be depends on which program ends up working out, I suspect. If something like canonical quantum gravity or loop quantum gravity turns out to be the way to go, then I think a lot of the conceptual work done in interpreting NRQM and QFT will carry over. If string theory turns out to be on the right track, then maybe a more radical interpretational revision will be required. The foundations of string theory are now thought to lie in M-theory, and the nature of this theory is still pretty conceptually opaque. It’s worth noting though that Bousso and Susskind have actually suggested that string theory provides a solid foundation for MWI, and that the worlds in the string theory landscape are the same thing as the worlds in MWI. See here for more on this. The paper has been on my “to read” list for a while, but I haven’t gotten around to it yet. I’m skeptical but interested.
Have you given any thought to Gerard ’t Hooft’s idea of cellular automata, which he claims salvages determinism, locality, and realism?
I know of ’t Hooft’s cellular automata stuff, but I don’t know much about it. Speaking from a position of admitted ignorance, I’m skeptical. I suspect the only way to construct a genuinely deterministic local realist theory that reproduces quantum statistics is to embrace superdeterminism in some form, i.e. to place constraints on the boundary conditions of the universe that make the statistics work out by hand. This move doesn’t seem like good physics practice to me. Do you know if ’t Hooft’s strategy relies on some similar move?
’t Hooft’s latest paper is the first in which he maps a full QFT to a CA, and the QFT in question is a free field theory. So I think that in this case he evades Bell’s theorem, quantum complexity theorems, etc, by working in a theory where physical detectors, quantum computers, etc don’t exist, because interactions don’t exist. It’s like how you can evade the incompleteness theorems if your arithmetic only has addition but not multiplication. Elsewhere he does appeal to superselection / cosmological initial conditions as a way to avoid cat states (macroscopic superpositions), but I don’t see that playing a role here.
The mapping itself has something to do with focusing on the fractional part of particle momentum as finite, and avoiding divergences by focusing on a particular subspace. It’s not a trivial result. But extending it to interacting field theory will require new ideas, e.g. making the state space of each individual cell in the CA into a Fock space, or permitting CTCs in the CA grid. Surely you need radical ingredients like that in order to recover the full quantum state space…
Aha, I see.
So you do not share EY’s view that MWI is “correct” and that the only problem it faces is recovering the Born Rule?
I agree that obviously what will end up working will depend on what the foundations are :)
I remember that paper by Bousso and Susskind; I even remember sending an email to Susskind about it, while at the same time asking his opinion of ’t Hooft’s work.
If I remember correctly, the paper was discussed at some length over at physicsforums.com (I can’t remember the thread), and the consensus seemed to be that the authors had misinterpreted decoherence in some way.
I don’t remember the details, but the fact that the paper has not been mentioned or cited in any article I have read since then suggests to me that there must have been some serious error in it.
Also, Susskind’s answer regarding ’t Hooft’s work was illuminating. To paraphrase, he said he felt that ’t Hooft might be correct, but that with no predictions on offer it was hard to hold a strong opinion either way. So it seems Susskind was not very sold on his own idea.
Gerard ’t Hooft actually does rely on what people call “superdeterminism”, which I just call “full determinism” (a term I think ’t Hooft prefers as well). At least, that is what his papers indicate. He discusses this in a 2008 article responding to Simon Kochen and John Conway’s Free Will Theorem. You might want to read it: http://www.sciencenews.org/view/generic/id/35391/title/Math_Trek__Do_subatomic_particles_have_free_will%3F

After that you might want to head over to arXiv: ’t Hooft has published three papers on this issue in the last six months, and he seems more and more certain of it. He also addresses the objections in some notes in those papers. Link: http://arxiv.org/find/quant-ph/1/au:+Hooft_G/0/1/0/all/0/1