I would be far more careful about using quantum physics in informal “philosophical” arguments. In most instances, people summon quantum effects to create a feeling of an answered question, while in fact the answer is confused or, worse, not an answer at all. The general rule is: every philosophical argument using the word quantum is bogus. (Take with a grain of salt, of course.)
More concretely, closed quantum systems (i.e. when no measurement is done) evolve deterministically, and their evolution can be periodic.
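To make that concrete, here is a minimal numerical sketch (Python with numpy and scipy; the two-level Hamiltonian is an arbitrary toy choice) of a closed system evolving deterministically and, in this case, periodically:

```python
import numpy as np
from scipy.linalg import expm

# Toy two-level Hamiltonian (hbar = 1) with eigenvalues +1 and -1.
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])

psi0 = np.array([1.0, 0.0], dtype=complex)  # initial state |0>

def evolve(psi, t):
    """Deterministic Schroedinger evolution: psi(t) = exp(-i H t) psi(0)."""
    return expm(-1j * H * t) @ psi

# The energy gap is 2, so the evolution is periodic with period T = 2*pi/2 = pi.
T = np.pi
overlap = abs(np.vdot(psi0, evolve(psi0, T)))
print(f"|<psi(0)|psi(T)>| = {overlap:.6f}")  # ~1.0: the state returns, up to a phase
```

No randomness enters anywhere in this evolution; that only happens once a measurement is brought in.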
I thought that in a closed quantum system there are only probabilities of a truly indeterministic nature—and the only deterministic part is at the collapse of the wave function (where the positions, speeds, … are truly determined—but impossible to measure correctly).
Still, the fact remains that our one universe is holding observers, and even if there is only one solution to past eternity—that of a cyclic universe of the same kind and with the same parameters of the big bang—the futures of the universe would be determined by the acts of those observers.
Different acts of observing—different universes in series (but strictly with the same physical constants).
All the consequences of observing in those universes would thus have to be realized.
I thought that in a closed quantum system there are only probabilities of a truly indeterministic nature—and the only deterministic part is at the collapse of the wave function (where the positions, speeds, … are truly determined—but impossible to measure correctly).
Mostly the opposite. In a closed quantum system, there are no probabilities, just the unitary, deterministic evolution of the wavefunction. On a measurement (which is a particular type of interaction with something outside the system), the collapse happens, and it is at this point that probabilities and nondeterminism are both introduced. Whatever property is being observed sets an eigenbasis for the measurement. Each eigenspace is assigned a probability of being chosen proportional to the squared norm of the state’s projection onto it—the sum of the squared lengths of the components lying in that eigenspace. This is the probability that the wavefunction is replaced by the renormalized projection of that wavefunction onto the chosen eigenspace.
(This is the simplest version—it only covers von Neumann measurements in the Schrödinger picture applied to pure states.)
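For concreteness, a sketch of that recipe in Python/numpy (the observable and state are arbitrary toy choices; the degenerate eigenvalue is included so that eigenspaces, and not just eigenvectors, matter):

```python
import numpy as np

# Toy observable with eigenvalues +1 (doubly degenerate) and -1.
A = np.diag([1.0, 1.0, -1.0])

psi = np.ones(3, dtype=complex) / np.sqrt(3)  # pure state to be measured

eigvals, eigvecs = np.linalg.eigh(A)

# Group eigenvectors into eigenspaces by (rounded) eigenvalue.
eigenspaces = {}
for lam, v in zip(eigvals, eigvecs.T):
    eigenspaces.setdefault(round(lam, 9), []).append(v)

for lam, basis in eigenspaces.items():
    proj = sum(np.vdot(v, psi) * v for v in basis)  # projection onto the eigenspace
    p = np.linalg.norm(proj) ** 2                   # Born probability: squared norm
    post = proj / np.linalg.norm(proj)              # renormalized post-collapse state
    print(f"outcome {lam:+.0f}: p = {p:.3f}, post state = {np.round(post, 3)}")
```

Here the +1 eigenspace gets probability 2/3 (two of the three equal components lie in it) and the −1 eigenspace gets 1/3, and each post-measurement state is the renormalized projection, as described.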
On a measurement (which is a particular type of interaction with something outside the system), the collapse happens, and it is at this point that probabilities and nondeterminism are both introduced.
That’s not very “MWI” of you! “Collapse” currently has the status of a fantasy which is unsupported by any evidence.
Agreed—MWI (the many-worlds interpretation) does not have any “collapse”: Instead, parts of the wavefunction merely become decoherent with each other, which might have the appearance of a collapse locally to observers. I know this is controversial, but I think the evidence is overwhelmingly in favor of MWI because it is much more parsimonious than competing models in the sense that really matters—and the only sense in which the parsimony of a model could really be coherently described. (It is kind of funny that both sides of the MWI or !MWI debate tend to refer to parsimony.)
I find it somewhat strange that people who have problems with “all those huge numbers of worlds in MWI” don’t have much of a problem with “all those huge numbers of stars and galaxies” in our conventional view of the cosmos—and it doesn’t cause them to reach for a theory which has a more complicated basic description but gets rid of all that huge amount of stuff. When did any of us last meet anyone who claimed that “the backs of objects don’t exist, except those being observed directly or indirectly by humans, because it is more parsimonious not to have them there, even if you need a contrived theory to do away with them”? That’s the problem with arguing against MWI: To reduce the “amount of stuff in reality”—which never normally bothers us with theories, and shouldn’t now—you have to introduce contrivance where it is really a bad idea: into the basic theory itself, by introducing some mechanism for “collapse”.
Somehow, with all this, there is some kind of cognitive illusion going on. As I don’t experience it, I can’t identify with it and have no idea what it is.
My problem with MWI is not the massive number of worlds—but how they are created.
How do you reconcile MWI with the 1st Law of thermodynamics?
And my problem is that questions like this are heavily downvoted. This isn’t a bad question per se, even if it may be a little bit confused. As I understand it, only a minority of people here are physicists, and quite a lot of people on LW don’t have a technical understanding of quantum theory. So the parent comment can’t be perceived as ignorant of some already shared standard of rationality. Also, MWI is still not a broad scientific consensus today, even if some portray it as such. So why does the parent stand at −5? Do we punish questioning the MWI? If so, why?
Now on topic. MWI doesn’t violate thermodynamics any more than the Copenhagen interpretation. In the CI one can have a superposition of states of different energy collapsing into one of the involved energies; the estimated (mean) energy of the state is not conserved through the measurement.
The energy is conserved in two senses: first, it is conserved during the evolution of a closed system (without measurement), and second, it is conserved completely when using statistical mixed states to model the system—in this case, the collapse puts the system into a mixed state, and the mean value of the measured observable (here, the energy) survives the collapse without change. Of course, energy conservation requires time-independent dynamics (that is, time-independence of the laws governing the system and of all physical constants) in both cases.
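A toy illustration of this in Python/numpy (the three-level Hamiltonian and the state are arbitrary choices): a single collapse generally changes the mean energy, but averaged over the post-measurement mixed state it is unchanged:

```python
import numpy as np

E = np.array([0.0, 1.0, 3.0])  # arbitrary energy eigenvalues
H = np.diag(E)

psi = np.array([0.6, 0.0, 0.8], dtype=complex)  # normalized: 0.36 + 0.64 = 1

mean_before = np.vdot(psi, H @ psi).real
print(f"<H> before collapse: {mean_before:.2f}")  # 0.36*0 + 0.64*3 = 1.92

# Copenhagen-style measurement of H: each individual outcome has a sharp
# energy that differs from the pre-measurement mean ...
probs = np.abs(psi) ** 2
for Ek, pk in zip(E, probs):
    if pk > 0:
        print(f"outcome E = {Ek}: probability {pk:.2f}")

# ... but the mean over the post-measurement ensemble (the mixed state)
# equals the pre-measurement expectation value exactly.
print(f"<H> after collapse (ensemble mean): {np.dot(probs, E):.2f}")  # 1.92 again
```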
An important technical point is that measurements always transfer energy to the apparatus, and therefore there is little sense in demanding conservation of energy of the measured system during a measurement. To model a realistic measurement, the apparatus has to be described by a non-self-adjoint Hamiltonian to effectively describe dissipation, or at least it has to have a time-dependent Hamiltonian, or both; else, the apparatus will not remember the results. In both cases, energy conservation is trivially broken.
As for the (implicit) first question, how the worlds are created: There is one Hilbert space consisting of all possible state vectors of the world. The state of the world can be, in a rough idealisation, decomposed into a tensor product of smaller states of individual observers and non-observer subsystems (whether a subsystem is or isn’t an observer is not particularly important, and is probably related to the problem of consciousness). In a subspace of a particular observer, some states are specific, while most of the states aren’t. The specific states correspond to certain thoughts. In an idealisation of an observer who cares only about one particular physical system, the observer’s specific states all correspond to states of the system which are said to have a sharp value of certain observables.
Now, in the Schrödinger picture, all state vectors evolve. Interaction between the observer and the observed system brings their state vectors into correlation. After that, the overall state vector of the observer+system compound can’t be written as a tensor product of an observer-vector and a system-vector, and thus talking about the state of the observer alone doesn’t make sense any more.
The consciousness of the observer works in such a way that it decomposes the state of the observer+system into a sum of vectors, each of which can be written as a tensor product of an observer-vector and a system-vector (although the entire sum can’t), and lives a separate instance on each summand. Each of these instances forms what is called a world in the MWI jargon.
These worlds thus aren’t created from void by some physical action. It’s perhaps better to say that they are “interpreted into existence” by individual observers’ consciousnesses. The division of the whole universe into individual worlds is observer dependent.
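A small sketch of this decomposition (Python/numpy; the qubit observer and qubit system are a deliberately crude idealisation): the Schmidt rank counts the product-state summands, i.e. the “worlds” in the sense just described:

```python
import numpy as np

def schmidt_rank(state, d_obs, d_sys, tol=1e-12):
    """Number of nonzero Schmidt coefficients of a pure observer+system state.
    Rank 1: the state is a tensor product. Rank > 1: observer and system are
    correlated, and 'the state of the observer alone' is no longer defined."""
    M = state.reshape(d_obs, d_sys)          # amplitudes as an observer x system matrix
    s = np.linalg.svd(M, compute_uv=False)   # Schmidt coefficients = singular values
    return int(np.sum(s > tol))

# Before the interaction: observer "ready", system in a superposition -- a product.
ready = np.array([1.0, 0.0])
system = np.array([1.0, 1.0]) / np.sqrt(2)
print(schmidt_rank(np.kron(ready, system), 2, 2))   # 1

# After a measurement-like interaction: |sees 0>|0> + |sees 1>|1>.
correlated = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(schmidt_rank(correlated, 2, 2))               # 2 summands, i.e. two "worlds"
```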
These worlds aren’t being “created out of nowhere” as people imagine it. They are only called worlds because they are regions of the wavefunction which don’t interact with other regions. It is the same wavefunction, and it is just being “sliced more thinly”. To an observer able to look at this from outside, there would just be the wavefunction, with parts that have decohered from each other, and that is it. To put it another way, when a world “splits” into two worlds, it makes sense to think of it as meaning that the “stuff” (actually the wavefunction) making up that world is divided up and used to make two new, slightly different worlds. There is no new “stuff” being created. Both worlds actually co-exist in the same space, even: It is only their decoherence from each other that prevents interaction. You said that your problem is “how they (the worlds) are created”, but there isn’t really anything new being created. Rather, parts of reality are ceasing interaction with each other, and there is no mystery about why this should be the case: Decoherence causes it.
Do you think the number of worlds is a definite and objective fact, or that it depends on how you slice the wavefunction?
Well, it isn’t really about what I think, but about what MWI is understood to say.
According to MWI, the worlds are being “sliced more thinly” in the sense that the total energy of each depends on its probability measure, and when a world splits, its probability measure, and therefore energy, is shared out among the worlds into which it splits. The answer to your question is a “sort of yes”, but I will qualify that shortly.
For practical purposes, it is a definite and objective fact. When two parts of the wavefunction have become decoherent from each other there is no interaction and each part is regarded as a separate world.
Now, to qualify this: Branches may actually interfere with each other in ways that aren’t really meaningful, so there isn’t really a point where you get total decoherence. You do get to a stage though where decoherence has occurred for practical purposes.
To all intents and purposes, it should be regarded as definite and objective.
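For concreteness, a back-of-the-envelope sketch of why the qualification is needed (Python; the model, in which interference visibility between two branches is set by the overlap of their environment states, is a standard toy picture, and the per-qubit rotation angle is an arbitrary choice):

```python
import numpy as np

def branch_visibility(n_env, theta=0.3):
    """Interference visibility between two branches whose environments differ
    by a small rotation on each of n_env environment qubits.
    Per-qubit overlap: <0| (cos(theta)|0> + sin(theta)|1>) = cos(theta);
    for product states the overlaps multiply."""
    return np.cos(theta) ** n_env

for n in [1, 10, 100, 1000]:
    print(f"{n:5d} environment qubits: visibility ~ {branch_visibility(n):.3e}")

# The visibility decays exponentially with the size of the environment but never
# reaches exactly zero: decoherence is a matter of degree, not an on/off event.
```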
the total energy of each [world] depends on its probability measure
Please check your sources on MWI. I think you must be misreading them.
Branches may actually interfere with each other in ways that aren’t really meaningful, so there isn’t really a point where you get total decoherence.
So in reality, decoherence is a matter of degree. But I thought that the existence of one world or many worlds depended on whether decoherence had occurred. Is there a threshold value, a special amount of decoherence which marks the transition?
It sounds like you might have issues with what looks like a violation of conservation of energy over a single universe’s history. If a world splits, the energy of each split-off world would have to be less than that of the original world. That doesn’t change the fact that conservation of energy appears to apply in each world: Observers in a world aren’t directly measuring the energy of the wavefunction, but instead they are measuring the energy of things like particles which appear to exist as a result of the wavefunction.
Advocates of MWI generally say that a split has occurred when a measurement is performed. It should also be noted that when it is said that “interference has stopped occurring” it really means “meaningful” interference—the interference still occurs but is just random noise, so you can’t notice it. (To use an extreme example, that’s supposed to be why you can’t see anyone in a world where the Nazis won WWII: That part of the wavefunction is so decoherent from yours that any interference is just random noise and there is therefore no meaningful interference. This should answer the question: As decoherence increases, the interaction gets more and more towards randomness and eventually becomes of no relevance to you.)
I suggest these resources.
Orzel, C., 2008. Many-Worlds and Decoherence: There Are No Other Universes. [Online] ScienceBlogs. Available at: http://scienceblogs.com/principles/2008/11/manyworlds_and_decoherence.php [Accessed 22 August 2010].
Price, M. C., 1995. The Everett FAQ. [Online] The Hedonistic Imperative. Available at: http://www.hedweb.com/manworld.htm [Accessed 22 August 2010].
If a world splits, the energy of each split-off world would have to be less than that of the original world.
No, you are misunderstanding the argument. I am an MWI opponent, but I know you are getting this wrong. If we switch to orthodox QM for a moment and ask what the energy of a generic superposition is, the closest thing to an answer is to talk about the expectation value of the energy observable for that wavefunction. This is a weighted average of the energy eigenvalues appearing in the superposition. For example, for the superposition 1/sqrt(2) |E=E1> + 1/sqrt(2) |E=E2>, the expectation value is E1/2 + E2/2. What Q22 in the Everett FAQ is saying is that the expectation value won’t a priori increase, even if new worlds are being created within the wavefunction, because the expectation value is the weighted average of the energies of the individual worlds; and in fact the expectation value will not change at all (something you can prove in a variety of ways).
To use an extreme example, that’s supposed to be why you can’t see anyone in a world where the Nazis won WWII: That part of the wavefunction is so decoherent from yours that any interference is just random noise and there is therefore no meaningful interference.
Well, this is another issue where, if I was talking to a skilled MWI advocate, I might be able to ask some probing questions, because there is a potential inconsistency in the application of these concepts. Usually when we talk about interference between branches of the wavefunction, it means that there are two regions in (say) configuration space, each of which has some amplitude, and there is some flow of probability amplitude from one region into the other. But this flow does not exist at the level of configurations, it only occurs at the level of configuration amplitudes. So if “my world”, “this world”, where the Nazis lost, is one configuration, and the world where the Nazis won is another configuration, there is no way for our configuration to suddenly resemble the other configuration on account of such a flow—that is a confusion of levels.
For me to observe interference phenomena, I have to be outside the superposition. But I wasn’t even born when WWII was decided, so I am intrinsically stuck in one branch. Maybe this is a quibble; we could talk about something that happened after my birth, like the 2000 US election. I live in a world where Bush won; but in principle could I see interference from a world where Gore won? I still don’t think it makes sense; the fact that I remember Bush winning means that I’m in that branch; I would have to lose the memory for the probability flow here to come into contact with the probability flow in a branch where Gore won. More importantly, the whole world configuration would have to morph until it came to resemble a world where Gore won, for some portion of the probability flow “here” to combine with the probability flow there.
I’ll try to explain what I’m talking about. The wavefunction consists of a complex-valued function defined throughout configuration space. Configuration space consists of static total configurations of the universe. Change exists only at the level of the complex numbers; where they are large, you have a “peak” in the wavefunction, and these peaks move around in configuration space, split and join, and so on. So really, it would be a mistake to think of configurations per se as the worlds; instead, you should perhaps be thinking about the “peaks”, the local wavepackets in configuration space, as worlds. Except, a peak can have a spread in configuration space. A single peak can be more like a “ridge” stretching between configurations which are classically inconsistent. This already poses problems of interpretation, as does the lack of clear boundaries to a peak… Are we going to say that a world consists of any portion of the wavefunction centered on a peak—a local maximum—and bounded by regions where the gradient is flat?
But here I can only throw up my hands and express my chronic exasperation with the fuzzy thinking behind many worlds. It is impossible to intelligently critique an idea when the exponent of the idea hasn’t finished specifying it and doesn’t even realize that they need to do more work. And then you have laypeople who take up the unfinished idea and advocate it, who are even more oblivious to the problems, and certainly incapable of answering them.
Paul, if I could convey to you one perspective on MWI, it would be as follows: Most people who talk about MWI do not have an exact definition of what a world is. Instead, it’s really an ideology, or a way of speaking: QM has superpositions in it, and the slogan is that everything in the superposition is real. But if this is to be an actual theory of the world, and not just an idea for a theory, you have to be more concrete. You have to say exactly what parts of a wavefunction are the worlds. And when you do this, you face new problems, e.g. to do with relativity and probability. The exact nature of the problems depends on how the MWI idea is concretized. But if you give me any concrete, detailed version of MWI, I can tell you what’s wrong with it.
First, let me say that this is a beautifully clear explanation of what MWI is, and especially of what questions it needs to answer.
Except, a peak can have a spread in configuration space. A single peak can be more like a “ridge” stretching between configurations which are classically inconsistent. This already poses problems of interpretation, as does the lack of clear boundaries to a peak… Are we going to say that a world consists of any portion of the wavefunction centered on a peak—a local maximum—and bounded by regions where the gradient is flat?
I don’t think this is any more unreasonable than talking about firing two separate localized wave-packets at each other and watching them interfere, even if we don’t have a specific fixed idea of what in full generality counts as a “wave-packet”. Typically, of course, for linear wave equations we’d use Gaussians as models, but I don’t think that’s more than a mathematically convenient exemplar.
For non-linear models (e.g. KdV) we have soliton solutions that have rather different properties, such as being self-focusing rather than spreading out. I guess I don’t see why it matters whether you have an exact definition for “world” or not—so long as you can plausibly exhibit them.
The question in my mind is whether evolution on configuration space preserves wave-packet localization, or under what conditions localized packets could develop. I find it hard even to formalize this, but given that we have a linear wave equation, I would tend to doubt that it does.
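For the one case that is easy to check, a free particle in one dimension, localization is indeed not preserved. A sketch (Python/numpy; the grid and the initial Gaussian are arbitrary choices), evolving the packet exactly in momentum space:

```python
import numpy as np

# Free-particle Schroedinger evolution (hbar = m = 1), exact in momentum space:
# psi_k(t) = psi_k(0) * exp(-i k^2 t / 2).
N, L = 2048, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

sigma0 = 1.0
psi = np.exp(-x**2 / (4 * sigma0**2)).astype(complex)  # Gaussian packet, width sigma0

def width(psi_t):
    """Position-space standard deviation of the packet."""
    p = np.abs(psi_t) ** 2
    p /= p.sum()
    mean = np.sum(p * x)
    return np.sqrt(np.sum(p * (x - mean) ** 2))

for t in [0.0, 5.0, 10.0, 20.0]:
    psi_t = np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * k**2 * t / 2))
    print(f"t = {t:5.1f}: packet width = {width(psi_t):.2f}")

# The width grows without bound (here roughly sigma0 * sqrt(1 + (t/2)^2)):
# linear dispersive evolution does not keep wave packets localized by itself.
```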
e.g. to do with relativity
Of course relativity will be an issue. QM is not Einsteinian relativistic, only Galilean (relabeling phases properly gives a Galilean boost), and that’s baked into the standard operators and evolution.
I do admit to over-generalizing in saying that when a world splits, the split-off worlds each HAVE to have lower energy than the “original world”. If we measure the energy associated with the wavefunction for individual worlds, on average, of course, this would have to be the case, due to the proliferation of worlds. However, I do understand, and should have stated, that all that matters is that the total energy for the system remains constant over time, and that probabilities matter.
Regarding the second issue, defining what a world is, I actually do understand your point: I feel that you think I understand less about this than is actually the case. Nevertheless, I would say that getting rid of the need for collapse does mean a lot and removes a lot of issues: more than are added with the “what constitutes a world” issue. However, we probably do need a “more-skilled MWI advocate” to deal with that.
I do admit to over-generalizing in saying that when a world splits, the split-off worlds each HAVE to have lower energy than the “original world”. If we measure the energy associated with the wavefunction for individual worlds, on average, of course, this would have to be the case, due to the proliferation of worlds
Let me see if I am understanding you. You’re now saying that the average energy-per-world goes down, “due to the proliferation of worlds”? Because that still isn’t right.
The simplest proof that the average energy is conserved is that energy eigenstates are stationary states: subjected to Hamiltonian evolution, they don’t change except for a phase factor. So if your evolving wavefunction is Psi(t), expressed in a basis of energy eigenstates it becomes sum_k c_k exp(-i E_k t) |E_k>. I.e. the time dependence is only in the coefficients of the energy eigenstates, and there’s no variation in their norm (since the time dependence is only in the phase factor), so the probability weightings of the energy eigenstates also don’t change. Therefore, the expectation value of the energy is a constant.
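That proof is easy to check numerically as well (Python with numpy/scipy; the Hamiltonian and the initial state are random toy choices):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Random Hermitian Hamiltonian and random normalized initial state.
n = 5
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2

psi0 = rng.normal(size=n) + 1j * rng.normal(size=n)
psi0 /= np.linalg.norm(psi0)

# Under psi(t) = exp(-i H t) psi(0), only the phases of the energy-eigenbasis
# coefficients change, so the weights |c_k|^2 -- and hence <H> -- are constant.
for t in [0.0, 1.0, 10.0, 100.0]:
    psi_t = expm(-1j * H * t) @ psi0
    print(f"t = {t:6.1f}: <H> = {np.vdot(psi_t, H @ psi_t).real:.12f}")
```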
There ought to be a “local” proof of energy conservation as well (at least, if we were working with a field theory), and it might be possible to insightfully connect that with decoherence in some way—that is, in a way which made clear that decoherence, the process which is supposed to be giving rise to world-splits, also conserves energy however you look at it—but that would require a bit more thought on my part.
ETA: Dammit, how do you do subscripts in markdown? :-)
ETA 2: Found the answer.
No, I think you are misunderstanding me here. I wasn’t claiming that proliferation of worlds CAUSES average energy per world to go down. It wouldn’t make much sense to do that, because it is far from certain that the concept of a world is absolutely defined (a point you seem to have been arguing). I was saying that the total energy of the wavefunction remains constant (which isn’t really unreasonable, because it is merely a wave developing over time—we should expect that), and I was saying that a CONSEQUENCE of this is that we should expect, on average, the energy associated with each world to decrease, as we have a constant amount of energy in the wavefunction and the number of worlds is increasing. If you have some way of defining worlds, and you have n worlds, and then later have a billion times n worlds, and you have some way of allocating energy to a world, then this would have to happen to maintain conservation of energy. Also, I’m not claiming that the issue is best dealt with in terms of “energy per world” either.
Now you are saying what I first thought you might have meant. :-) Namely, you are talking about the energy of the wavefunction as if it were itself a field. In a way, this brings out some of the difficulties with MWI and the common assertion that MWI results from taking the Schrodinger equation literally.
It’s a little technical, but possibly the essence of what I’m talking about is to be found by thinking about Noether’s theorem. This is the theorem which says that symmetries lead to conserved quantities such as energy. But the theorem is really built for classical physics. Ward identities are the quantum counterpart, but they work quite differently, because (normally) the wavefunction is not treated as if it is a field, it is treated as a quasiprobability distribution on the physical configuration space. In effect, you are talking about the energy of the wavefunction as if the classical approach, Noether’s theorem, was the appropriate way to do so.
There are definitely deep issues here because quantum field theory is arguably built on the formal possibility of treating a wavefunction as a field. The Dirac equation was meant to be the wavefunction of a single particle, but to deal with the negative-energy states it was instead treated as a field which itself had to be quantized (this is called “second quantization”). Thus was born quantum field theory and the notion of particles as field quanta.
MWI seems to be saying, let’s treat configuration space as a real physical space, and regard the second-quantized Schrodinger equation as defining a field in that space. If you could apply Noether’s theorem to that field in the normal way (ignoring the peculiarity that configuration space is infinite-dimensional), and somehow derive the Ward identities from that, that would be a successful derivation of orthodox quantum field theory from the MWI postulate. But skeptical as I am, I think this might instead be a way to illuminate from yet another angle why MWI is so problematic or even unviable. Right away, for example, MWI’s problem with relativity will come up.
Anyway, that’s all rather esoteric, but the bottom line is that you don’t use this “Noetherian configuration-space energy” in quantum mechanics, you use a concept of energy which says that energy is a property of the individual configurations. And this is why there’s no issue of “allocating energy to a world” from a trans-world store of energy embodied in the wavefunction.
How do you reconcile MWI with the 1st Law of thermodynamics?
A better question—how does the observed 1st law of thermodynamics arise from the laws of physics underpinning the many worlds?
Why do you see a conflict? You seem to be assuming both that the total energy of the universe is positive (not known!), and that each universe has the same total energy (i.e. that energy is not arbitrarily scalable). Why not assume that a universe with 100 joules of energy splits into two universes—one with 100 zorkmids of energy and the other with 100 arkbarks of energy (where it is understood that 1 zorkmid + 1 arkbark = 1 joule)?
Erm, I can tell you less about physics than the creationist museum can about evolution, but I don’t think it applies to open systems.
Anyway, for some interesting information about thermodynamics go here:
I don’t understand the argument in that post, even now re-reading it over a year later. Do you? If not, why are you citing it?
The post shows that even intellectual high-fliers like EY and Hanson can err on this topic (if it is to be believed that they were wrong). In other words, I wouldn’t recommend asking questions made up of terms that are themselves poorly understood even by the best minds within this community.
Well, no, that language is not. But it’s the standard language. Of all the interpretations, MWI makes the most sense to me, but quantum mechanics really is “merely” a very good effective model. (See the conflict between SR and QM. QFT neatly dodges some obstacles, but has even more horrendous interpretational issues. And we can only barely torture answers out of it in some limited cases in curved spacetime.)
Even so—there is a nondeterministic variable in our universe.
Even if a cyclic model is true, which allows for only one kind of parameter setup for the big bang, the nondeterministic variables would certainly lead to different outcomes in different cycles.
Hence, all the scenarios that arise from these nondeterministic possibilities would have to be realized.