This article (PDF) gives a nice (and fairly accessible) summary of some of the issues involved in extending MWI to QFT. See sections 4 and 8 in particular. Their focus in the paper is wavefunction realism, but given that MWI (at least the version advocated in the Sequences) is committed to wavefunction realism, their arguments apply. They offer a suggestion of the kind of theory that they think can replace MWI in the relativistic context, but the view is insufficiently developed (at least in that paper) for me to fully evaluate it.
A quick summary of the issues raised in the paper:
In NRQM, the wave function lives in configuration space, but there is no well-defined particle configuration space in QFT since particle number is not conserved and particles are emergent entities without precisely defined physical properties.
A move to field configuration space is unsatisfactory because quantum field theories admit of equivalent description using many different choices of field observable. Unlike NRQM, where there are solid dynamical reasons for choosing the position basis as fundamental, there seems to be no natural or dynamically preferred choice in QFT, so a choice of a particular field configuration space description would amount to ad hoc privileging.
MWI in NRQM treats physical space as non-fundamental. This is hard to justify in QFT, because physical space-time is bound up with the fundamentals of the theory to a much greater degree. The dynamical variables in QFT are operators that are explicitly associated with space-time regions.
This objection is particularly clever and interesting, I think. In MWI, the history of the universe is fully specified by giving the universal wavefunction at each time in some reference frame. In a relativistic context, one would expect that all one needs to do in order to describe how the universe looks in some other inertial reference frame is to perform a Lorentz transformation on this history. If the history really tells us everything about the physical state of the universe, then it gives us all the information required to determine how the universe looks under a Lorentz transformation. But in relativistic quantum mechanics, this is not true. Fully specifying the wavefunction (defined on an arbitrarily chosen field configuration space, say) at all times is not sufficient to determine what the universe will look like under a Lorentz transformation. See the example on p. 21 in the paper, or read David Albert’s paper on narratability. This suggests that giving the wavefunction at all times is not a full specification of the physical properties of the universe.
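To give a rough sense of how this can happen (a schematic reconstruction of the kind of example at issue, not the paper’s exact setup): prepare two spacelike-separated spin-1/2 particles in the singlet state
$$|\psi^-\rangle = \tfrac{1}{\sqrt{2}}\big(|\uparrow\rangle_A |\downarrow\rangle_B - |\downarrow\rangle_A |\uparrow\rangle_B\big),$$
and compare a history with no dynamics to one in which each particle receives the same local unitary kick $U$ at $t = 0$ in our chosen frame. Because the singlet is invariant (up to a global phase) under $U \otimes U$, both histories assign the same wavefunction to every constant-$t$ hyperplane in that frame. But a boosted hyperplane that passes after A’s kick and before B’s sees $(U \otimes I)\,|\psi^-\rangle$, which for nontrivial $U$ is a genuinely different state. So the frame-relative history underdetermines what boosted observers see.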
On the other hand, my understanding is that QFT itself doesn’t exist in a rigorous form yet, either.
I assume you’re referring to the infinities that arise in QFT when we integrate over arbitrarily short length scales. I don’t think this shows a lack of rigor in QFT. Thanks to the development of renormalization group theory in the 70s, we know how to do functional integrals in QFT with an imposed cutoff at some finite short length scale. QFT with a cutoff doesn’t suffer from problems involving infinities. Of course, the necessity of the cutoff is an indication that QFT is not a completely accurate description of the universe. But we already know that we’re going to need a theory of quantum gravity at the Planck scale. In the domain where it works, QFT is reasonably rigorously defined, I’d say.
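For a schematic illustration (not tied to any particular field theory): the troublesome integrals look like the one-loop expression
$$\int^{\Lambda} \frac{d^4k}{(2\pi)^4}\, \frac{1}{k^2 + m^2} \;\sim\; \Lambda^2,$$
which is perfectly finite for any finite cutoff $\Lambda$ and diverges only if we insist on integrating over arbitrarily large momenta, i.e. arbitrarily short distances.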
This article (PDF) gives a nice (and fairly accessible) summary of some of the issues involved in extending MWI to QFT.
Thanks for that; it’s quite an interesting article, and I’m still trying to absorb it. However, one thing that seems pretty clear to me is that for EY’s intended philosophical purposes, there really is no important distinction between “wavefunction realism” (in the context of NRQM) and “spacetime state realism” (in the context of QFT). Especially since I consider this post to be mostly wrong: locality in configuration space is what matters, and configuration space is a vector space (specifically a Hilbert space) -- there is no preferred (orthonormal) basis.
I assume you’re referring to the infinities that arise in QFT when we integrate over arbitrarily short length scales. I don’t think this shows a lack of rigor in QFT
If the “problem” is merely that certain integrals are divergent, then I agree. No one says that the fact that $\int_0^1 \frac{1}{x}\,dx$ diverges shows a lack of rigor in real analysis!
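(Spelled out: $\int_\epsilon^1 \frac{dx}{x} = \ln(1/\epsilon)$, which grows without bound as $\epsilon \to 0^+$. Real analysis tells us exactly when and how the limit fails to exist; nothing unrigorous is going on.)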
What concerns me is whether any actual mathematical lies are being told—such as integrals being assumed to converge when they haven’t yet been proved to do so. Or something like the early history of the Dirac delta, when physicists unashamedly spoke of a “function” with properties that a function cannot, in fact, have.
If QFT is merely a physical lie—i.e., “not a completely accurate description of the universe”—and not a mathematical one, then that’s a different matter, and I wouldn’t call it an issue of “rigor”.
However, one thing that seems pretty clear to me is that for EY’s intended philosophical purposes, there really is no important distinction between “wavefunction realism” (in the context of NRQM) and “spacetime state realism” (in the context of QFT).
I’m a little unclear about what EY’s intended philosophical purposes are in this context, so this might well be true. One possible problem worth pointing out is that spacetime state realism involves abandoning a particular form of reductionism. Whether EY is committed to that form of reductionism is something somebody more familiar with the Sequences than I am would have to judge.
According to spacetime state realism, the physical state of a spacetime region is not supervenient on the physical states of its subregions, i.e. the physical state of a spacetime region could be different without any of its subregions being in different states. This is because subregions can be entangled with one another in different ways without altering their local states. This is not true of wavefunction realism set in configuration space. There, the only way a region of configuration space could have different physical properties is if some of its subregions had different properties.
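A minimal two-qubit example of this (a standard textbook fact, stated here in NRQM language): the orthogonal global states
$$|\psi^\pm\rangle = \tfrac{1}{\sqrt{2}}\big(|\uparrow\downarrow\rangle \pm |\downarrow\uparrow\rangle\big)$$
assign each qubit exactly the same local state, the maximally mixed $\rho_A = \rho_B = \tfrac{1}{2} I$. The difference between them resides entirely in how the subsystems are entangled, not in either subsystem taken alone.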
Also, I think it’s possible that the fact that the different “worlds” in spacetime state realism are spatially overlapping (as opposed to wavefunction realism, where they are separated in configuration space) might lead to interesting conceptual differences between the two interpretations. I haven’t thought about this enough to give specific reasons for this suspicion, though.
Especially since I consider this post to be mostly wrong: locality in configuration space is what matters, and configuration space is a vector space (specifically a Hilbert space) -- there is no preferred (orthonormal) basis.
I’m not sure exactly what you’re saying here, but if you’re rejecting the claim that MWI privileges a particular basis, I think you’re wrong. Of course, you could treat configuration space itself as if it had no preferred basis, but this would still amount to privileging position over momentum. You can’t go from position space to momentum space by a change of coordinates in configuration space. Configuration space is always a space of possible particle position configurations, no matter how you transform the coordinates.
I think you might be conflating configuration space with the Hilbert space of wavefunctions on configuration space. In this latter space, you can transform from a basis of position eigenstates to a basis of momentum eigenstates with a coordinate transformation. But this is not configuration space itself, it is the space of square integrable functions on configuration space. [I’m lying a little for simplicity: Position and momentum eigenstates aren’t actually square integrable functions on configuration space, but there are various mathematical tricks to get around this complication.]
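To make the distinction concrete for a single particle in one dimension: configuration space is just $\mathbb{R}$ (the possible positions), while the state space is $L^2(\mathbb{R})$. The position-to-momentum change of basis is the Fourier transform
$$\tilde{\psi}(p) = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} \psi(x)\, e^{-ipx/\hbar}\, dx,$$
a unitary map on $L^2(\mathbb{R})$, not a change of coordinates on $\mathbb{R}$ itself.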
What concerns me is whether any actual mathematical lies are being told—such as integrals being assumed to converge when they haven’t yet been proved to do so. Or something like the early history of the Dirac delta, when physicists unashamedly spoke of a “function” with properties that a function cannot, in fact, have.
If this is your standard for lack of rigor, then perhaps QFT hasn’t been rigorously formulated yet, but the same would hold of pretty much any physical theory. I think you can find places in pretty much every theory where some such “mathematical lie” is relied upon. There’s an example of a standard mathematical lie told in NRQM earlier in my post.
In many of these cases, mathematicians have formulated more rigorous versions of the relevant proofs, but I think most physicists tend to be blithely ignorant of these mathematical results. Maybe QFT isn’t rigorously formulated according to the mathematician’s standards of rigor, but it meets the physicist’s lower standards of rigor. There’s a reason most physicists working on QFT are uninterested in things like Algebraic Quantum Field Theory.
I’m a little unclear about what EY’s intended philosophical purposes are in this context
As I read him, he mainly wants to make the point that “simplicity” is not the same as “intuitiveness”, and the former trumps the latter. It may seem more “humanly natural” for there to be some magical process causing wavefunction collapse than for there to be a proliferation of “worlds”, but because the latter doesn’t require any additions to the equations, it is strictly simpler and thus favored by Occam’s Razor.
I think you might be conflating configuration space with the Hilbert space of wavefunctions on configuration space.
Yes, sorry. What I actually meant by “configuration space” was “the Hilbert space that wavefunctions are elements of”. That space, whatever you call it (“state space”?), is the one that matters in the context of “wavefunction realism”.
(This explains an otherwise puzzling passage in the article you linked, which contrasts the “configuration space” and “Hilbert space” formalisms; but on the other hand, it reduces my credence that EY knows what he’s talking about in the QM sequence, since he doesn’t seem to talk about the space-that-wavefunctions-are-elements-of much at all.)
If this is your standard for lack of rigor, then perhaps QFT hasn’t been rigorously formulated yet, but the same would hold of pretty much any physical theory
This is contrary to my understanding. I was under the impression that classical mechanics, general relativity, and NRQM had all by now been given rigorous mathematical formulations (in terms of symplectic geometry, Lorentzian geometry, and the theory of operators on Hilbert space respectively).
Maybe QFT isn’t rigorously formulated according to the mathematician’s standards of rigor, but it meets the physicist’s lower standards of rigor. There’s a reason most physicists working on QFT are uninterested in things like Algebraic Quantum Field Theory.
The mathematician’s standards are what interests me, and are what I mean by “rigor”. I don’t consider it a virtue on the part of physicists that they are unaware of or uninterested in the mathematical foundations of physics, even if they are able to get away with being so uninterested. There is a reason mathematicians have the standards of rigor they do. (And it should of course be said that some physicists are interested in rigorous mathematics.)
This is a very good post, but I wonder: one of the authors of the paper you cite is David Wallace, perhaps the most prominent proponent of the modern Everettian interpretation.
He just published a new book called “The Emergent Multiverse”, and he claims there is no problem unifying MWI with QFT because interactions within worlds are local and only states are nonlocal.
I have yet to hear him mention any need for serious reformulation of anything in terms of MWI.
You said you suspect this is necessary and that you hope we can recover a similar MWI, but isn’t it more reasonable to expect that at the Planck scale something else will explain the quantum weirdness?
After all, if MWI fails both probability and relativity, then there is no good reason to suspect that this interpretation is correct.
Have you given any thought to Gerard ’t Hooft’s idea of cellular automata, which he claims salvages determinism, locality, and realism?
You said you suspect this is necessary and that you hope we can recover a similar MWI, but isn’t it more reasonable to expect that at the Planck scale something else will explain the quantum weirdness?
When I talk about recovering MWI, I really just mean absorbing the lesson that our theory does not need to deliver determinate measurement results, and ad hoc tools for satisfying this constraint (such as collapse or hidden variables) are otiose. Of course, the foundations of our eventual theory of quantum gravity might be different enough from those of quantum theory that the interpretational options don’t translate. How different the foundations will be depends on which program ends up working out, I suspect. If something like canonical quantum gravity or loop quantum gravity turns out to be the way to go, then I think a lot of the conceptual work done in interpreting NRQM and QFT will carry over. If string theory turns out to be on the right track, then maybe a more radical interpretational revision will be required. The foundations of string theory are now thought to lie in M-theory, and the nature of this theory is still pretty conceptually opaque. It’s worth noting though that Bousso and Susskind have actually suggested that string theory provides a solid foundation for MWI, and that the worlds in the string theory landscape are the same thing as the worlds in MWI. See here for more on this. The paper has been on my “to read” list for a while, but I haven’t gotten around to it yet. I’m skeptical but interested.
Have you given any thought to Gerard ’t Hooft’s idea of cellular automata, which he claims salvages determinism, locality, and realism?
I know of ’t Hooft’s cellular automata stuff, but I don’t know much about it. Speaking from a position of admitted ignorance, I’m skeptical. I suspect the only way to construct a genuinely deterministic local realist theory that reproduces quantum statistics is to embrace superdeterminism in some form, i.e. to place constraints on the boundary conditions of the universe that make the statistics work out by hand. This move doesn’t seem like good physics practice to me. Do you know if ’t Hooft’s strategy relies on some similar move?
’t Hooft’s latest paper is the first in which he maps a full QFT to a CA, and the QFT in question is a free field theory. So I think that in this case he evades Bell’s theorem, quantum complexity theorems, etc, by working in a theory where physical detectors, quantum computers, etc don’t exist, because interactions don’t exist. It’s like how you can evade the incompleteness theorems if your arithmetic only has addition but not multiplication. Elsewhere he does appeal to superselection / cosmological initial conditions as a way to avoid cat states (macroscopic superpositions), but I don’t see that playing a role here.
The mapping itself has something to do with focusing on the fractional part of particle momentum as finite, and avoiding divergences by focusing on a particular subspace. It’s not a trivial result. But extending it to interacting field theory will require new ideas, e.g. making the state space of each individual cell in the CA into a Fock space, or permitting CTCs in the CA grid. Surely you need radical ingredients like that in order to recover the full quantum state space…
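(For reference, the Fock space over a one-particle space $\mathcal{H}$ is $\mathcal{F}(\mathcal{H}) = \bigoplus_{n=0}^{\infty} \mathrm{Sym}^n\, \mathcal{H}^{\otimes n}$, the direct sum of symmetrized $n$-particle sectors; so each cell would have to carry states of arbitrary particle number.)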
Aha, I see.
So you do not share EY’s view that MWI is “correct”, then, and that the only problem it faces is recovering the Born Rule?
I agree that obviously what will end up working will depend on what the foundations are :)
I remember that paper by Bousso and Susskind; I even remember sending a mail to Susskind about it, while at the same time asking him about his opinion of ’t Hooft’s work.
If I remember correctly, the paper was discussed at some length over at physicsforums.com (can’t remember the post), and the consensus seemed to be that the authors had misinterpreted decoherence in some way.
I don’t remember the details, but the fact that the paper has not been mentioned or cited in any article I have read since then suggests to me that there must have been some serious error in it.
Also, Susskind’s answer regarding ’t Hooft’s work was illuminating. To paraphrase, he said he felt that ’t Hooft might be correct, but that without any predictions it was hard to hold a strong opinion either way on the matter. So it seems Susskind was not very sold on his own idea.
On the other hand, my understanding is that QFT itself doesn’t exist in a rigorous form yet, either.
Depends on what you mean by rigorous. (OTOH, it’s not fully compatible with general relativity, so we know it doesn’t exactly describe the world—or that GR doesn’t, or that neither does.)
If you bug physicists enough, they will admit that the Standard Model has some problems, like the Landau pole. However, there are toy QFTs in 2 spatial dimensions that have models rigorous enough for mathematicians. That should be adequate for philosophical purposes.
I don’t think the Landau pole can be characterized as an actual problem. It was considered a problem for strong interactions, but we now know that quantum chromodynamics is asymptotically free, so it does not have a Landau pole. The Landau pole for quantum electrodynamics is at an energy scale much, much higher than the Planck energy. We already know that we need new physics at the Planck scale, so the lack of asymptotic freedom in the Standard Model is not a real practical (or even conceptual) problem.
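For a rough sense of the scales involved (one-loop running with a single charged fermion, so only indicative): the QED coupling runs as
$$\alpha(\mu) = \frac{\alpha(\mu_0)}{1 - \frac{2\alpha(\mu_0)}{3\pi} \ln(\mu/\mu_0)},$$
which blows up at $\mu \sim \mu_0\, e^{3\pi / 2\alpha(\mu_0)}$. With $\alpha \approx 1/137$ the exponent is in the hundreds, so the pole sits fantastically far beyond the Planck energy.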
If you don’t like the question I’m answering, complain to Komponisto, not me.
I wasn’t complaining to anyone. And I don’t dislike the question. I was just adding some relevant information. Anyway, I did reply directly to komponisto as well. See the end of my long comment above.
But what would you count as a conceptual problem?
If we did not have independent evidence that QFT breaks down at the Planck scale (since gravity is not renormalizable), I might have considered the Landau pole a conceptual problem for QFT. But since it is only a problem in a domain where we already know QFT doesn’t work, I don’t see it that way.
I don’t think that’s the normal use of “conceptual problem.”
If physicists believe, as their verbiage seems to indicate, that QED is a real theory that is an approximation to reality, and they compute approximations to the numbers in QED, while QED is actually inconsistent, I would say that is an error and a paradigmatic example of a conceptual error.
What does it mean to interpret an inconsistent theory?
There is the standard MWI advocacy that matches Eliezer’s views. This is a critique of this advocacy, point by point. See especially Q14, re QFT. This gives a reason why MWI is not a useful object of study.
This is a critique of this advocacy, point by point. See especially Q14, re QFT. This gives a reason why MWI is not a useful object of study.
The first critique seems to criticize something different from what Eliezer says. It seems like the person quoted by the author did not express themselves clearly, and the critique attacks that unclear explanation. For example, this part:
When do worlds split?
The precise moment/location of the split is not sharply defined due to the subjective nature of irreversibility, but can be considered complete when much more than kT of energy has been released in an uncontrolled fashion into the environment. At this stage the event has become irreversible.
How can irreversibility be subjective if it defines what a measurement is and when worlds split? It would imply that when worlds split is also a subjective matter. But then it is observer-dependent, the very thing the interpretation is trying to avoid.
For me, Eliezer’s explanation of “blobs of amplitude” makes sense. There is a set of possible configurations, which at the beginning are all very similar, but because some interactions make the differences grow, the set gradually separates into smaller subsets. When exactly? Well, in theory the parts are connected forever, but the connection only has epsilon size relative to the subsets, so it can be ignored. But asking when exactly is like asking “what exactly is the largest number that can be considered ‘almost zero’?”. If you want to be exact, only zero is exactly zero. On the other hand, 1/3^^^3 is for all practical purposes zero. I would feel uncomfortable picking one number and saying “OK, this X is ‘almost zero’, but 1.000001 X is not ‘almost zero’”.
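To put a toy number on that epsilon-sized connection (a simple NRQM estimate, not anything from the quoted discussion): two normalized Gaussian blobs of width $\sigma$ centered at $a$ and $b$ have overlap
$$\langle \psi_a | \psi_b \rangle = e^{-(a-b)^2 / 8\sigma^2},$$
which shrinks exponentially as the blobs separate but never reaches exactly zero. That is precisely why “when have the worlds finished splitting?” has the same flavor as “what is the largest number that counts as ‘almost zero’?”.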
The quoted person seems to be saying something similar, just less clearly, which allows the critic to seize on the word “subjective” and jump to the wrong conclusion that the author is saying that mathematics is observer-dependent. (Analogously, just because you and I can have different interpretations of ‘almost zero’, that does not mean mathematics is subjective and observer-dependent. It just means that ‘almost zero’ is not exactly defined, even though in real life we care whether e.g. the water we drink contains ‘almost zero’ poison.)
So generally, for me it means that once someone famous gives a wrong (or just ambiguous) explanation of MWI, that explanation will forever be used as an argument against anything similar to MWI.
This gives a reason why MWI is not a useful object of study.
Well, not quite. Someone ought to be thinking about this sort of stuff, and the claim that link makes is that MWI isn’t worth considering because it goes against the “scientific ethos.”
The reason I would tell people why MWI is not a useful object of study (for them) is because until you make it a disagreement about the territory, disagreeing about maps cashes out as squabbling. How you interpret QM should not matter, so don’t waste time on it.
This is important if true, and I would like to know more. What are the barriers?
Do you know if ’t Hooft’s strategy relies on some similar move?
Gerard ’t Hooft actually does rely on what people call “superdeterminism”, which I just call “full determinism”, which I think is also the term ’t Hooft prefers. At least that is what his papers indicate. He discusses this some in an article from 2008 responding to Simon Kochen and John Conway’s Free Will Theorem. You might want to read the article: http://www.sciencenews.org/view/generic/id/35391/title/Math_Trek__Do_subatomic_particles_have_free_will%3F After that you might want to head on over to arXiv; ’t Hooft has published 3 papers in the last 6 months on this issue, and he seems more and more certain of it. He also addresses the objections in some notes in those papers. Link: http://arxiv.org/find/quant-ph/1/au:+Hooft_G/0/1/0/all/0/1
The Landau pole for quantum electrodynamics is at an energy scale much, much higher than the Planck energy.
The Landau pole for QED goes away when QED is coupled with QCD, but I believe another one appears with the Higgs field.
How you interpret QM should not matter, so don’t waste time on it.
Tell that to EY.