The De Broglie-Bohm theory is a very interesting interpretation of quantum mechanics. The highlights of the theory are:
The wavefunction is treated as being real (just as in MWI—in fact the theory is compatible with MWI in some ways),
Particles are also real, and are guided deterministically by the wavefunction (the guidance equation is written out just below). In other words, it is a hidden variable theory.
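For concreteness, here is the standard non-relativistic statement of the theory's two laws (textbook de Broglie-Bohm, added for reference): the wavefunction obeys the Schrödinger equation, and the actual particle configuration Q = (Q_1, …, Q_N) is carried along by it:

$$ i\hbar\,\frac{\partial \psi}{\partial t} = H\psi, \qquad \frac{dQ_k}{dt} = \frac{\hbar}{m_k}\,\operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\Bigg|_{Q(t)} $$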
At first it might seem to be a cop-out to assume the reality of both the wavefunction and of actual point particles. However, this leads to some very interesting conclusions. For example, you don’t have to assume wavefunction collapse (as per Copenhagen) but at the same time, a single preferred Universe exists (the Universe given by the configuration of the point particles). But that’s not all.
It very neatly explains double-slit diffraction and Bell’s experiments in a purely deterministic way using hidden variables (it is thus necessarily a non-local theory). It also explains the Born probabilities (the one thing that is missing from pure MWI; Eliezer has alluded to this).
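The Born-probability claim rests on the equivariance property (standard in the Bohmian literature): $|\psi|^2$ obeys the same continuity equation as a density of particles moving with the guidance velocity,

$$ \partial_t |\psi|^2 + \sum_k \nabla_k \cdot \left( |\psi|^2\, v_k \right) = 0, \qquad v_k = \frac{\hbar}{m_k}\,\operatorname{Im}\frac{\nabla_k \psi}{\psi}, $$

so an ensemble distributed as $\rho = |\psi|^2$ once (‘quantum equilibrium’) stays that way, and the usual quantum statistics follow.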
Among other things, De Broglie-Bohm theory allows quantum computers but doesn’t allow quantum immortality—in this theory if you shoot yourself in the head you really will die. You won’t suddenly be yanked into an alternate Universe.
The reason I’m mentioning it is a set of experiments done by Yves Couder’s group (http://math.mit.edu/~bush/?page_id=484), who have managed to build a crude and approximate physical system that incidentally illustrates some of the properties of De Broglie-Bohm theory. They use oil droplets that generate waves, and the resulting waves guide the droplets. Most importantly, the droplets have ‘path memory’, so if a droplet is directed towards a double slit, it can ‘interfere’ with itself and produce nice double-slit diffraction fringes. One of their experiments that was in the news recently illustrated particle behavior very similar to what the Schrödinger equation predicts: http://math.mit.edu/~bush/?p=2679
Now, De Broglie-Bohm theory does not seem to be one of the more popular interpretations of QM, because of its non-locality (this doesn’t produce causal paradoxes like the Grandfather paradox, though, despite what some might say). However, in my opinion this is very unfair. Locality is just a relic from classical physics. I haven’t seen a single good argument why the eventual theory of everything should be local.
If you subscribe to MWI, locality is a reason to abandon De Broglie-Bohm theory, but a relatively minor one—the main one is the way it insists on neglecting the reality of the guide wave.
If you take the guide wave to be a dynamical entity, then it’s real and it’s all happening so all the worlds are real, so what does the particle do here?
If you take the guide wave to be the rules of the universe (a tack I’ve heard) then the rules of the universe contain civilizations—literally, not as hypothetical implications. Choosing to use timeless physics (the response I got) doesn’t change this.
If you take the guide wave to be a dynamical entity, then it’s real and it’s all happening so all the worlds are real, so what does the particle do here?
The particle position recovers the Born probabilities. (It even does so deterministically, unlike Objective Collapse theories.) The wave function encodes lots of information, but it’s the particle that moves our measuring device, and the measuring device that moves our brains. If we succeed in simplifying our theory only by giving up on saving the phenomenon, then our theory is too simple.
But once you decide you’re going to interpret the wave function as distributing probability among some set of orthogonal subspaces, you’re already compelled into the Born probabilities.
All you need to decide that you ought to do that is the general conclusion that the wavefunction represents some kind of reality-fluid. Deciding that the nature of this reality fluid is to be made of states far more specific than any entity within quantum mechanics comes rather out of the blue.
But the phrase “reality fluid” is just a place-holder. It’s a black box labeled “whatever solves this here problem”. What we see is something particle-like, and it’s the dynamics relating our observations over time that complicates the story. As Schrödinger put it:
[T]he emerging particle is described … as a spherical wave … that impinges continuously on a surrounding luminescent screen over its full expanse. The screen however does not show a more or less constant uniform surface glow, but rather lights up at one instant at one spot[.]
One option is to try to find the simplest theory that explains away the particle-like appearance anthropically, which will get you an Everett-style (‘Many Worlds’-like) interpretation. Another option is to take the sudden intrusion of the Born probabilities as a brute law of nature, which will get you a von-Neumann-style (‘Collapse’-like) interpretation. The third option is to accept the particle-like appearance as real, but theorize that a more unitary underlying theory relates the Schrödinger dynamics to the observed particle, which will get you a de-Broglie-style (‘Hidden Variables’) interpretation. You’ll find Bohmian Mechanics more satisfying than Many Worlds inasmuch as you find MW’s anthropics hand-wavey or underspecified; and you’ll find BM more satisfying than Collapse inasmuch as you think Nature’s Laws are relatively simple, continuous, scalable, and non-anthropocentric.
If BM just said, ‘Well, the particle’s got to be real somehow, and the Born probabilities have to emerge from its interaction with a guiding wave somehow, but we don’t know how that works yet’, then its problems would be the same as MW’s. But BM can formally specify how “reality fluid” works, and in a less ad-hoc way than its rivals. So BM wins on that count.
Where it loses is in ditching locality and Special Relativity, which is a big cost. (It’s also kind of ugly and complicated, but it’s hard to count that against BM until we’ve seen a simpler theory that’s equally fleshed out re the Measurement Problem.)
Deciding that the nature of this reality fluid is to be made of states far more specific than any entity within quantum mechanics comes rather out of the blue.
Would you say that acknowledging the Born probabilities themselves ‘comes out of the blue’, since they aren’t derived from the Schrödinger equation? If not, then where are physicists getting them from, since it’s not the QM dynamics?
I wouldn’t call Everett ‘Anthropic’ per se. I consider it an application of the Generalized Anti-Zombie Principle: Here you’ve got this structure that acts like it’s sapient†. Therefore, it is.
As for BM formally specifying how the reality fluid works… need I point out that this is 100% entirely backwards, being made of burdensome details?
Would you say that acknowledging the Born probabilities themselves ‘comes out of the blue’, since they aren’t derived from the Schrödinger equation?
The Schrödinger Equation establishes linearity, thus directly allowing us to split any arbitrary wavefunction however we please. Already we can run many worlds side-by-side. The SE’s dynamics lead to decoherence, which makes MWI have branching. It’s all just noticing the structure that’s already in the system.
Edited to add †: by ‘acts like’ I mean ‘has the causal structure for it to be’
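To make the splitting-by-linearity claim above concrete, here is a minimal numerical sketch (the 4-state Hamiltonian is an arbitrary stand-in, not any particular physical system): evolving a superposition gives exactly the sum of the separately evolved parts.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# An arbitrary Hermitian Hamiltonian on a 4-state system (hypothetical).
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

# Unitary time evolution for one step (hbar = 1).
U = expm(-1j * H * 0.1)

psi_a = rng.normal(size=4) + 1j * rng.normal(size=4)
psi_b = rng.normal(size=4) + 1j * rng.normal(size=4)

# Linearity: evolving the sum equals summing the evolutions,
# so the two "worlds" can be run side by side.
assert np.allclose(U @ (psi_a + psi_b), U @ psi_a + U @ psi_b)
```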
The Schrödinger Equation establishes linearity, thus directly allowing us to split any arbitrary wavefunction however we please.
But many of the more general Lagrangians of particle physics are non-linear; in general there should be higher-order, non-linear corrections. So Schrödinger is a single-particle/linearized approximation. What does this do for your view of many worlds? When we try to extend many worlds naively to QFTs we run into all sorts of weird problems (much of the universal wavefunction’s amplitude doesn’t have well-defined particle number, etc.). Shouldn’t we expect the ‘proper’ interpretation to generalize nicely to the full QFT framework?
Or rather, the proper interpretation should work in the full QFT framework, and may or may not work for ordinary QM.
What are you talking about? I’ve only taken one course in quantum field theory, but I’ve never heard of anything where quantum mechanics was not linear. Can you give me a citation? It seems to me that failure of linearity would either be irrelevant (superlinear case, low amplitudes) or so dominant that any linearity would be utterly irrelevant and the Born Probabilities wouldn’t even be a good approximation.
Also, by ‘the Schrödinger equation’ I didn’t mean the special form which is the fixed-particle Hamiltonian with p²/2m kinetic energy—I meant the general form:
iħ (d/dt) Ψ = H Ψ
Note that the Dirac Equation is a special case of this general form of the Schrödinger Equation. MWI, ‘naive’ or not, has no trouble with variations in particle number.
I’m not sure what you mean by ‘anthropic per se’. Everett (MW) explains apparent quantum indeterminism anthropically, via indexical ignorance; our knowledge of the system as a whole is complete, but we don’t know where in the system we are at this moment. De Broglie (HV) explains apparent quantum indeterminism via factual ignorance; our knowledge of the system’s physical makeup is incomplete, and that alone creates the appearance of randomness. Von Neumann (OC) explains apparent quantum indeterminism realistically; the world just is indeterministic.
The SE’s dynamics lead to decoherence, which makes MWI have branching. It’s all just noticing the structure that’s already in the system.
This is either a very implausible answer, or an answer to a different question than the one I asked. Historically, the Born Probabilities are derived directly from experimental data, not from the theorized dynamics. The difficulty of extracting the one from the other, of turning this into a single unified and predictive theory, just is the ‘Measurement’ Problem. Bohm is taking two distinct models and reifying mechanisms for each to produce an all-encompassing theory; maybe that’s useless or premature, but it’s clearly not a non sequitur, because the evidence for a genuine wave/particle dichotomy just is the evidence that makes scientists allow probabilistic digressions from the Schrödinger equation.
MW is not a finished theory until we see how it actually unifies the two, though I agree there are at least interesting and suggestive first steps in that direction. BM’s costs are obvious and clear and formalized, which is its main virtue. Our ability to compare those costs to other theories’ is limited so long as it’s the only finished product under evaluation, because it’s easy to look simple when you choose to only try to explain some of the data.
I see what you mean now about anthropism. Yes, ignorance is subjective. Incidentally, this is how it used to be back before quantum ever came up.
This is either a very implausible answer, or an answer to a different question than the one I asked. Historically, the Born Probabilities are derived directly from experimental data, not from the theorized dynamics
Historically, Born was way before Everett and even longer before decoherence, so that’s not exactly a shocker. Even in Born’s time it was understood that subspaces had only one way of adding up to 1 in a way that respects probability identities—I’d bet dollars to donuts that that was how he got the rule in the first place, rather than doing a freaking curve fit to experimental data. What was missing at the time was any way to figure out what the wavefunction was, between doing its wavefunctiony thing and collapse.
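A compressed sketch of the argument being gestured at here (my paraphrase, not Born’s own derivation): for orthonormal outcome states, the squared amplitudes are the one additive assignment of weights that sums to 1 for every state,

$$ \psi = \sum_i c_i\, e_i,\quad \langle e_i, e_j \rangle = \delta_{ij} \;\Longrightarrow\; \|\psi\|^2 = \sum_i |c_i|^2, \qquad P(i) = \frac{|c_i|^2}{\|\psi\|^2}, $$

and Gleason’s theorem sharpens this into a genuine uniqueness result for Hilbert spaces of dimension three or more.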
Decoherence explains what collapse is made of. With it around, accepting the claim ‘The Schrödinger Equation is the only rule of dynamics; collapse is illusory and subjective’, which is basically all there is to MWI, requires much less bullet-biting than before it was introduced. There is still some, but those bullets are much chewier for me than any alternate rules of dynamics.
(incidentally, IIRC, Shminux, you hold the above quote but not MWI, which I find utterly baffling—if you want to explain the difference or correct me on your position, go ahead)
maybe that’s useless or premature, but it’s clearly not a non sequitur
Good thing I never said it was.
Decoherence explains what collapse is made of. With it around, accepting the claim ‘The Schrödinger Equation is the only rule of dynamics; collapse is illusory and subjective’, which is basically all there is to MWI
Well, you still need a host of ideas about how to actually interpret a diagonal density matrix. Because you don’t have Born probabilities as a postulate, you have this structure but no method for connecting it back to lab-measured values.
While it seems straightforward, that’s because many-worlds advocates are doing sleight of hand. They use probabilities to build a theory (because lab experiments appear to be describable only probabilistically), and later they kick away that ladder but want to keep all the structure that comes with it (density matrices, etc.).
I know of many good expositions that start with the probabilities and use them to develop the form of the Schrödinger equation from Galilean relativity and cluster decomposition (Ballentine, parts of Weinberg).
I don’t know any good expositions that go the other way. There are reasons that Deutsch, Wallace, etc. have spent so much time trying to develop Born probabilities in a many-worlds context—because it’s an important problem.
Hold on a moment. What ladder is being kicked away here?
We’ve got observed probabilities. They’re the experimental results, the basis of the theory. The theory then explains this in terms of indexical ignorance (thanks, RobbBB). I don’t see a kicked ladder. Not every observed phenomenon needs a special law of nature to make it so.
Instead of specially postulating the Born Probabilities, elevating them to the status of a law of nature, we use them to label parts of the universe in much the same way as we notice, say, hydrogen or iron atoms - ‘oh, look, there’s that thing again’. In this case, it’s the way that sometimes, components of the wavefunction propagate such that different segments won’t be interfering with each other coherently (or in any sane basis, at all).
Also, about density matrices—what’s the problem? We’re still allowed to not know things and have subjective probabilities, even in MWI. Nothing in it suggests otherwise.
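To spell out that no-coherent-interference-between-segments picture in the usual formalism (standard decoherence algebra, added for reference): once the system is entangled with an environment whose record states are effectively orthogonal, the reduced density matrix is diagonal, with the Born weights as its entries:

$$ |\Psi\rangle = \sum_i c_i\, |i\rangle_S\, |E_i\rangle, \quad \langle E_i | E_j \rangle \approx \delta_{ij} \;\Longrightarrow\; \rho_S = \operatorname{Tr}_E\, |\Psi\rangle\langle\Psi| \approx \sum_i |c_i|^2\, |i\rangle\langle i|. $$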
The SE’s dynamics lead to decoherence, which makes MWI have branching. It’s all just noticing the structure that’s already in the system.
That’s just regurgitating the teacher’s password. MWI does not even account for the radioactive decay. In other words, if you find the Schrodinger’s cat dead, how long has it been dead for?
Regurgitating the teacher’s password is a matter of mental process, and you have nowhere near the required level of evidence to make that judgement here.
As for radioactive decay, I’m not clear what you require of MWI here. The un-decayed state has amplitude which gradually diminishes, leaking into other states. When you look in a cat box, you become entangled with it.
If the states resulting from death at different times are distinguishable, then you can go ahead and distinguish them, and there’s your answer (or, if it could be done in principle but we’re not clever enough, then the answer is ‘I don’t know’, but for reasons that don’t really have bearing on the question).
Where it really gets interesting is if the states resulting from cat-death are quantum-identical. Then it’s exactly like asking, in a diffraction-grating experiment, ‘Which slit did the photon go through?’. The answer is either ‘mu’, or ‘all of them’, depending on your taste in rejecting questions. The final result is the weighted sum of all of the possible times of death, and no one of them is correct.
Note that for this identical case to apply, nothing inside the box gets to be able to tell the time (see note), which pretty much rules out its being an actual cat.
So… If you find Schrödinger’s cat dead, then it will have had a (reasonably) definite time of death, which you can determine only limited by your forensic skills.
~~
Note: The issue is that of cramming time-differentiating states into one final state. The only way you can remove information like that is to put it somewhere else. If you have a common state that the cat falls into from a variety of others, then the radiation from the cat’s decays into this common state encodes this information. It will be lost to entropy, but that just falls under the aegis of ‘we’re not clever enough to get it back out’ again, and isn’t philosophically interesting.
Regurgitating the teacher’s password is a matter of mental process, and you have nowhere near the required level of evidence to make that judgement here.
Yeah, sorry, that was uncalled for.
The un-decayed state has amplitude which gradually diminishes, leaking into other states.
Right. And each of those uncountably many (well, finitely many for a finite cutoff or countably many for a finite box) states corresponds to a different time of death (modulo states which have the same time of death but different emitted particle momenta).
When you look in a cat box, you become entangled with it.
Yes, with all of those states.
If the states resulting from death at different times are distinguishable
They must be, since they result in different macroscopic effects (from the forensic time-of-death measurement).
Where it really gets interesting is if the states resulting from cat-death are literally, quantum-identical.
Yes, but in this case they are not.
Then it’s exactly like asking, in a diffraction-grating experiment, ‘Which slit did the photon go through?’.
Not at all. In the diffraction experiment you don’t distinguish between different paths, you sum over them.
The final result is the sum of all of the possible times of death, and no one of them is correct.
No, you measure the time pretty accurately, so wrong-time states do not contribute.
Note that for this latter case to apply, nothing inside the box gets to be able to tell the time (cramming time-differentiating states into one final state would violate Liouville’s theorem or some quantum equivalent, the name of which slips my mind), which pretty much rules out its being an actual cat.
Not quite. If the cat does not interact with the rest of the world, the cat is a superposition of all possible decay states. (I am avoiding the objective collapse models here.) It’s pretty actual, except for having to be at near 0 K to avoid leaking information about its states via thermal radiation.
So… If you find Schrödinger’s cat dead, then it will have had a (reasonably) definite time of death, which you can determine only limited by your forensic skills.
Yes it will. But a different time in different “worlds”. Way too many of them.
The first few responses here boil down to the last response:
But a different time in different “worlds”. Way too many of them.
Why is it too many? I don’t understand what the problem is here. When you’d collapse the wavefunction, you’re often tossing out 99.9999% of said wavefunction. In MWI or not, that’s roughly splitting the world into 1 million parts and keeping one. The question is the disposition of the others.
Where it really gets interesting is if the states resulting from cat-death are literally, quantum-identical.
Yes, but in this case they are not.
Well, yes, because it’s a freaking cat. I had already dealt with the realistic case and was attempting to do something with the other one by explicitly invoking the premise even if it is absurd. The following pair of quote-responses (responding to the lines with ‘diffraction’ and ‘sum of all the possible’) was utterly unnecessary because they were in a conditional ‘if A then B’, and you had denied A.
Of course, one could decline to use a cat and substitute a system which can maintain coherence, in which case the premise is not at all absurd. This was rather what I was getting at, but I’d hoped that your ability to sphere the cow was strong enough to give a cat coherence.
Why is it too many? I don’t understand what the problem is here. When you’d collapse the wavefunction, you’re often tossing out 99.9999% of said wavefunction. In MWI or not, that’s roughly splitting the world into 1 million parts and keeping one. The question is the disposition of the others.
Well, if you are OK with the world branching infinitely many ways every infinitesimally small time interval in every infinitesimally small volume of space, then I guess you can count it as “the disposition”. This is not, however, the way MWI is usually presented.
Spacetime is not saturated with decoherence events.
Inference gap.
Roughly speaking: if you’re working in an interpretation with collapse (whether objective or not), and it’s too early to collapse a wavefunction, then MWI says that all those components you were declining to collapse are still in the same world.
So, since you don’t go around collapsing the wavefunction into infinite variety of outcomes at every event of spacetime, MWI doesn’t call for that much branching.
Roughly speaking: if you’re working in an interpretation with collapse (whether objective or not), and it’s too early to collapse a wavefunction
I don’t understand what “too early to collapse a wavefunction” means and how it is related to decoherence.
For example, suppose we take a freshly prepared atom in an excited state (it is simpler than radioactive decay). QFT says that its state evolves into a state in the Fock space which is, schematically,
(ground state of the atom) + (excited states of the EM vacuum, i.e. a photon).
I mean “+” here loosely, to denote that it’s a linear combination of the product states with different momenta. The phase space of the photon includes all possible directions of momentum as well as anything else not constrained by the conservation laws. The original excited state of the atom is still there, as well as the original ground state of the EM field, but it’s basically lost in the phase space of all possible states.
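For reference, the standard (Wigner-Weisskopf) way of writing that state, with |e⟩, |g⟩ the atomic levels and the one-photon states labeled by momentum and polarization:

$$ |e\rangle|0\rangle \;\longrightarrow\; \alpha(t)\,|e\rangle|0\rangle \;+\; \sum_{\mathbf{k},\lambda} c_{\mathbf{k}\lambda}(t)\,|g\rangle|1_{\mathbf{k}\lambda}\rangle, \qquad |\alpha(t)|^2 \approx e^{-\Gamma t}, $$

with the one-photon amplitudes spread continuously over all momenta and polarizations allowed by energy conservation.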
Suppose there is also a detector surrounding the atom, which is sensitive to this photon (we’ll include the observer looking at the detector in the detector to avoid the Wigner’s friend discussion). Once the excitation of the field propagates far enough to reach the detector, the total state is evolved into
(ground state of the atom) + (excited states of the detector).
So now the wave function of the original microscopic quantum system has “collapsed”, as far as the detector is concerned. (“decohered” is a better term, with less ontological baggage). I hope this is pretty uncontroversial, except maybe to a Bohmian, to Penrose, or to a proponent of objective collapse, but that’s a separate discussion.
So now we have at least as many worlds/branches as there were states in the Fock space. Some will differ by detection time, others by the photon direction, etc. The only things limiting the number of branches are various cutoffs, like the detector size.
Am I missing anything here?
That’s right, but it doesn’t add up to what you said about spacetime being saturated with ‘world-branching’ events.
While the decay wave is propagating, for instance, nothing’s decohering. It’s only when it reaches the critically unstable system of the detector that that happens.
It’s only when it reaches the critically unstable system of the detector that that happens.
There is no single moment like that. If the distance from the atom to the detector is r and we prepare the atom at time 0, the interaction between the atom/field states and the detector states (i.e. decoherence) starts at time r/c and continues on.
interaction between the atom/field states and the detector states (i.e. decoherence) starts at time r/c and continues on
Depends on your framework, but it will actually start even earlier than that in a general QFT. The expectation will be non-zero for all times t. I suppose the physical interpretation is something like a local fluctuation trips the detector.
Of course, commutators will be non-zero as locality requires.
Right, good point. Still, there are rarely just a few distinct branches in almost any measurement process; it’s a continuum of states, isn’t it?
I don’t understand what “too early to collapse a wavefunction” means and how it is related to decoherence.
I see that my short, simple answer didn’t really explain this, so I’ll try the longer version.
Under a collapse interpretation, when is it OK to collapse things and treat them probabilistically? When the quantum phenomena have become entangled with something with enough degrees of freedom that you’re never going to get coherent superposition back out (i.e., it has decohered). If you collapse earlier than this, you lose the coherent superpositions and get two one-slit patterns added to each other, and that’s all wrong.
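A toy far-field sketch of that two-one-slit-patterns point (the envelope and phase gradient are made-up illustrative parameters): keeping the cross term gives fringes; dropping it, which is what decoherence effectively does, gives the smooth sum of two one-slit patterns.

```python
import numpy as np

# Toy far-field double slit: on the screen, each slit contributes a
# common envelope times an oppositely advancing phase (assumed values).
x = np.linspace(-5, 5, 1001)   # position on the screen
k = 6.0                        # hypothetical phase gradient
envelope = np.exp(-x**2 / 4)

psi1 = envelope * np.exp(+1j * k * x)   # amplitude via slit 1
psi2 = envelope * np.exp(-1j * k * x)   # amplitude via slit 2

coherent = np.abs(psi1 + psi2) ** 2                 # fringes: 4*env^2*cos^2(kx)
decohered = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # smooth two-hump sum

# Decoherence removes exactly the interference cross term:
cross = 2 * np.real(psi1 * np.conj(psi2))
assert np.allclose(coherent, decohered + cross)
```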
This is also the same criterion for when you consider worlds to diverge in MWI. Therefore, in a two-slit experiment you don’t have two worlds, one for each slit. They’re still one world. Unless of course they got entangled with something messy, in which case that caused a divergence.
Now… once it hits the messy thing (for simplicity let’s say it’s the detector), you’re looking at a thermally large number of worlds, and the weights of these worlds are precisely given by the conservation of squared amplitude, a.k.a. the Born Rule.
I take it that it bothers you that scattering events producing a thermally large number of worlds is the norm rather than the exception? Quantum mechanics occurs in Fock space, which is unimaginably, ridiculously huge, as I’m sure you’re well aware. The wavefunction is like a gas escaping from a bottle into outer space. And the gas escapes over and over again, because each ‘outer space’ is just another bottle to escape from by scattering.
Or is what’s bugging you that MWI is usually presented as creating less than a thermally large number of worlds? That’s a weakness of common explanations, sure. Examples may replace 10^(mole) with 2 for simplicity’s sake.
I think we are in agreement here that interacting with the detector initially creates a messy entangled object. If one believes Zurek, it then decoheres/relaxes into a superposition of eigenstates through einselection, while bleeding away all other states into the “environment”. Zurek seems to be understandably silent on whether a single eigenstate survives (collapse) or they all do (MWI).
What I was pointing out with the spontaneous emission example is that there are no discrete eigenstates there, thus all possible emission times and directions are on an equal footing. If you are OK with this being described as MWI, I have no problem with that. I have not seen it described this way, however. In fact, I do not recall seeing any treatment of spontaneous emission in the MWI context. I wonder why.
Another, unrelated issue I have not seen addressed by MWI (or objective collapse) is how in the straight EPR experiment on a singlet and two aligned detectors one necessarily gets opposite spin measurements, even though each spacelike-separated interaction produces “two worlds”, up and down. Apparently these 2x2 worlds somehow turn into just 2 worlds (updown and downup), with the other two (upup and downdown) magically discarded to preserve angular momentum conservation. But I suppose this is a discussion for another day.
In fact, I do not recall seeing any treatment of spontaneous emission in the MWI context. I wonder why.
Peculiar. That was one of the first examples I ever encountered. Not the first two, but it was one of the earlier ones. It was emphasized that there is a colossal number of ‘worlds’ coming out of this sort of event, and the two-way splits in the previous examples were just simplest-possible cases.
… in the straight EPR experiment on a singlet and two aligned detectors one necessarily gets opposite spin measurements, even though each spacelike-separated interaction produces “two worlds”, up and down
How can you cut a pizza twice and get only two slices? By running the pizza cutter over the same line again. Same deal here. By applying the same test to the two entangled particles, you get correlated results. Or do you mean, how can MWI keep track of the information storage aspects of quantum mechanics? Well, we live in Fock space.
That was one of the first examples I ever encountered.
I’d appreciate some links.
By applying the same test to the two entangled particles, you get correlated results.
I’m lost here again. The two splits happen independently at two spacelike separated points and presumably converge (at the speed of light or slower) and start interacting, somehow resulting in only two worlds at the point where the measurements are compared. If this is a bad model, what is a good one?
My original source was unfortunately a combination of conversations and a book I don’t remember the title of, so I can’t take you back to the original source.
I’m lost here again. The two splits happen independently at two spacelike separated points and presumably converge (at the speed of light or slower) and start interacting, somehow resulting in only two worlds at the point where the measurements are compared. If this is a bad model, what is a good one?
The thing is, they’re not truly independent because the particles were prepared so as to already be entangled—the part of Fock space you put the system (and thus yourself) in is one where the particles are already aligned relative to each other, even though no one particular absolute alignment is preferred. If you entangle yourself with one, then you find you’re already entangled with the other.
It’s just like it works the rest of the time in quantum mechanics, because that’s all that’s going on.
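Writing that out (standard entanglement algebra, added to connect back to the 2x2-worlds-versus-2-worlds question): the detector unitaries act linearly on the two terms of the singlet, so only two branches ever receive amplitude:

$$ \frac{1}{\sqrt{2}}\big(|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle\big)\,|D_1\rangle|D_2\rangle \;\longrightarrow\; \frac{1}{\sqrt{2}}\big(|{\uparrow\downarrow}\rangle\,|D_1^{\uparrow}\rangle|D_2^{\downarrow}\rangle - |{\downarrow\uparrow}\rangle\,|D_1^{\downarrow}\rangle|D_2^{\uparrow}\rangle\big); $$

the upup and downdown ‘worlds’ never appear because the initial state has zero amplitude on them, so there is nothing to discard.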
(†) A quick rundown of how prominent this notion is, judging by Google results for ‘many worlds’: Wikipedia seemed to ignore quantity. The second hit was HowStuffWorks, which gave an abominable (and obviously pop) treatment. Third was a NOVA interview, and that didn’t give a quantitative answer but stated that the number of worlds was mind-bogglingly large. Fourth was an entry at Plato.stanford.edu, which was quasi-technical while making me cringe about some things, and didn’t as far as I could tell touch on quantity. Fifth was a very nontechnical ‘top 10’-style article which had the huge number of worlds as entries 10, 9, and 8. The sixth and seventh hits were a movie promo and a book review. Eighth was the article I linked above, in preprint form (and so no anchor link, I had to find that somewhere else).
The thing is, they’re not truly independent because the particles were prepared so as to already be entangled—the part of Fock space you put the system (and thus yourself) in is one where the particles are already aligned relative to each other, even though no one particular absolute alignment is preferred. If you entangle yourself with one, then you find you’re already entangled with the other.
Right, the two macroscopic systems are entangled once both interact with the singlet, but this is a non-local statement which acts as a curiosity stopper, since it does not provide any local mechanism for the apparent “action at a distance”. Presumably MWI would offer something better than shut-up-and-calculate, like showing how the pairs of worlds seen locally at each detector propagate toward each other, interact, and become just two worlds at the point where the results are compared, thanks to the original correlations present when the singlet was initially prepared. Do you know of anything like that written up anywhere?
Part 1 - to your first sentence: If you accept quantum mechanics as the one fundamental law, then state information is already nonlocal. Only interactions are local. So, the way you resolve the apparent ‘action at a distance’ isn’t to deny that it’s nonlocal, but to deny that it’s an action. To be clearer:
Some events transpire locally, that determine which (nonlocal) world you are in. What happened at that other location? Nothing.
Part 2 - Same as last link, question 32, with one exception: I would say that |me(L)> and such, being macrostates, do not represent single worlds but thermodynamically large bundles of worlds that share certain common features. I have sent an email suggesting this change (but considering the lack of edits over the last 18 years, I’m not confident that it will happen).
To summarize: just forget about MWI and use conventional quantum mechanics + macrostates. The entanglement is infectious, so each world ends up with an appropriate pair of measurements.
My original source was unfortunately a combination of conversations and a book I don’t remember the title of, so I can’t take you back to the original source.
But, I found something here. (†)
Thanks! It looks like the reference equates the number of worlds with the number of microstates, since it calculates it as exp(S/k), not as the number of eigenstates of some interaction Hamiltonian, which is the standard lore. From this point of view, it is not clear how many worlds you get in, say, a single-particle Stern-Gerlach experiment: 2, or the exponential of the entropy change of the detector after it’s triggered. Of course, one can say that we can coarse-grain them the usual way we construct macrostates from microstates, but then why introduce many worlds instead of simply doing quantum stat mech or even classical thermodynamics?
Anyway, I could not find this essential point (how many worlds?) in the QM sequence, but maybe I missed it. All I remember is the worlds of different “thickness”, which is sort of like coarse-graining microstates into macrostates, I suppose.
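To give that exp(S/k) counting a sense of scale, a tiny check with an assumed, purely illustrative entropy increase for a detector click:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

# Hypothetical entropy increase when a macroscopic detector clicks.
delta_S = 1e-3       # J/K (an assumed illustrative value)

# Number of microstate-worlds ~ exp(delta_S / k_B); report its log10.
log10_worlds = delta_S / (k_B * math.log(10))
print(f"~10^{log10_worlds:.2e} worlds")   # about 10^(3.1e19)
```

Even a millijoule-per-kelvin click dwarfs the ‘2 worlds’ of the simplified Stern-Gerlach story.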
On the contrary, I’ve found that MWI is “usually presented” as continuous branching happening continuously over time and space. And (the argument goes) you can’t argue against it on the grounds of parsimony any more than you can argue against atoms or stars on the grounds of parsimony. (There are other valid criticisms, to be sure, but breaking parsimony is not one of them.)
Sure. Here’s one. LW’s own quantum physics sequence discusses systems undergoing continuously branching evolution. Even non-MWI books are fairly explicit in pointing out that the wavefunction is continuous but we’ll study discrete examples to get a feel for things (IIRC).
In fact, I don’t think I’ve ever seen an MWI claim outside of scifi that postulates discrete worlds. I concede that some of the wording in layman explanations might be confusing, but even simplifications like “all worlds exist” or “all quantum possibilities are taken” imply continuous branching.
It seems to me like continuous branching is the default, not the exception. Do you have any non-fiction examples of MWI being presented as a theory with discretely branching worlds?
Precisely. It’s also not a trivial connection. The way the interaction between the wavefunction and the particles produces the Born probabilities is subtle and interesting (see MrMind’s comment below on some of the subtleties involved).
The main problem with Bohmian mechanics, from my perspective, is not that it is non-local per se (after all, the lesson of Bell’s theorem is that all interpretations of QM will be non-local in some sense), but that its particular brand of egregious non-locality makes it very difficult to come up with a relativistic version of the theory. I have seen some attempts at developing a Bohmian quantum field theory, but they have been pretty crude (relying on undetectable preferred foliations, for instance, which I consider anathema). I haven’t been keeping track, though, so maybe the state of play has changed.
I haven’t seen a single good argument why the eventual theory of everything should be local.
No love for the principle of relativity? It’s been real successful, and nonlocality means choosing a preferred reference frame. Even if the effects are non-observable, that implies immense contortions to jump through the hoops set by SR and GR, and reality being elegant seems to have worked so far. And sure, MWI may trample all over human uniqueness, but invoking human uniqueness didn’t lead to the great cosmological breakthroughs of the 20th century.
Yes, the feeling I have is that of uneasiness, not rejection. But still, DBB can be put in agreement with relativity only through the proper initial conditions, which I see as a defect (although not an obviously fatal one).
The De Broglie-Bohm theory is a very interesting interpretation of quantum mechanics. The highlights of the theory are:
The wavefunction is treated as being real (just as in MWI—in fact the theory is compatible with MWI in some ways),
Particles are also real, and are guided deterministically by the wavefunction. In other words, it is a hidden variable theory.
At first it might seem to be a cop-out to assume the reality of both the wavefunction and of actual point particles. However, this leads to some very interesting conclusions. For example, you don’t have to assume wavefunction collapse (as per Copenhagen) but at the same time, a single preferred Universe exists (the Universe given by the configuration of the point particles). But that’s not all.
It very neatly explains double-slit diffraction and Bell’s experiments in a purely deterministic way using hidden variables (it is thus necessarily a non-local theory). It also explains the Born probabilities (the one thing that is missing from pure MWI; Elezier has alluded to this).
Among other things, De Broglie-Bohm theory allows quantum computers but doesn’t allow quantum immortality—in this theory if you shoot yourself in the head you really will die. You won’t suddenly be yanked into an alternate Universe.
The reason I’m mentioning it is because of experiments done by Yves Couder’s group (http://math.mit.edu/~bush/?page_id=484) who have managed to build a crude and approximate physical system that incidentally illustrates some of the properties of De Broglie-Bohm theory. They use oil droplets that generate waves and the resulting waves guide the droplets. Most importantly, the droplets have ‘path memory’, so if a droplet is directed towards a double slit, it can ‘interfere’ with itself and produce nice double-slit diffraction fringes. One of their experiments that was just in the news recently illustrated particle behavior very similar to what the Schrodinger equation predicts: http://math.mit.edu/~bush/?p=2679
Now, De Broglie-Bohm theory does not seem to be one of the more popular interpretations of QM, because of its non-locality (this doesn’t produce causal paradoxes like the Grandfather paradox, though, despite what some might say). However, in my opinion this is very unfair. Locality is just a relic from classical physics. I haven’t seen a single good argument why the eventual theory of everything should be local.
If you ascribe to MWI, locality is a reason to abandon De Broglie-Bohm theory, but a relatively minor one—instead, it’s the way it insists on neglecting the reality of the guide wave.
If you take the guide wave to be a dynamical entity, then it’s real and it’s all happening so all the worlds are real, so what does the particle do here?
If you take the guide wave to be the rules of the universe (a tack I’ve heard) then the rules of the universe contain civilizations—literally, not as hypothetical implications. Choosing to use timeless physics (the response I got) doesn’t change this.
The particle position recovers the Born probabilities. (It even does so deterministically, unlike Objective Collapse theories.) The wave function encodes lots of information, but it’s the particle that moves our measuring device, and the measuring device that moves our brains. If we succeed in simplifying our theory only by giving up on saving the phenomenon, then our theory is too simple.
But once you decide you’re going to interpret the wave function as distributing probability among some set of orthogonal subspaces, you’re already compelled into the Born probabilities.
All you need to decide that you ought to do that is the general conclusion that the wavefunction represents some kind of reality-fluid. Deciding that the nature of this reality fluid is to be made of states far more specific than any entity within quantum mechanics comes rather out of the blue.
But the phrase “reality fluid” is just a place-holder. It’s a black box labeled “whatever solves this here problem”. What we see is something particle-like, and it’s the dynamics relating our observations over time that complicates the story. As Schrödinger put it:
One option is to try to find the simplest theory that explains away the particle-like appearance anthropically, which will get you an Everett-style (‘Many Worlds’-like) interpretation. Another option is to take the sudden intrusion of the Born probabilities as a brute law of nature, which will get you a von-Neumann-style (‘Collapse’-like) interpretation. The third option is to accept the particle-like appearance as real, but theorize that a more unitary underlying theory relates the Schrödinger dynamics to the observed particle, which will get you a de-Boglie-style (‘Hidden Variables’) interpretation. You’ll find Bohmian Mechanics more satisfying than Many Worlds inasmuch as you find MW’s anthropics hand-wavey or underspecified; and you’ll find BM more satisfying than Collapse inasmuch as you think Nature’s Laws are relatively simple, continuous, scalable, and non-anthropocentric.
If BM just said, ‘Well, the particle’s got to be real somehow, and the Born probabilities have to emerge from its interaction with a guiding wave somehow, but we don’t know how that works yet’, then its problems would be the same as MW’s. But BM can formally specify how “reality fluid” works, and in a less ad-hoc way than its rivals. So BM wins on that count.
Where it loses is in ditching locality and Special Relativity, which is a big cost. (It’s also kind of ugly and complicated, but it’s hard to count that against BM until we’ve seen a simpler theory that’s equally fleshed out re the Measurement Problem.)
Would you say that acknowledging the Born probabilities themselves ‘comes out of the blue’, since they aren’t derived from the Schrödinger equation? If not, then where are physicists getting them from, since it’s not the QM dynamics?
I wouldn’t call Everett ‘Anthropic’ per se. I consider it an application of the Generalized Anti-Zombie Principle: Here you’ve got this structure that acts like it’s sapient†. Therefore, it is.
As for BM formally specifying how the reality fluid works… need I point out this this is 100% entirely backwards, being made of burdensome details?
The Schrödinger Equation establishes linearity, thus directly allowing us to split any arbitrary wavefunction however we please. Already we can run many worlds side-by-side. The SE’s dynamics lead to decoherence, which makes MWI have branching. It’s all just noticing the structure that’s already in the system.
Edited to add †: by ‘acts like’ I mean ‘has the causal structure for it to be’
But many of the more-general lagrangians of particle physics are non-linear, in general there should be higher order, non-linear corrections. So Schrödinger is a single-particle/linearized approximation. What does this do for your view of many worlds? When we try to extend many worlds naively to QFTs we run into all sorts of weird problems (much of the universal wavefunction’s amplitude doesn’t have well defined particle number,etc). Shouldn’t we expect the ‘proper’ interpretation to generalize nicely to the full QFT framework?
Or rather, the proper interpretation should work in the full QFT framework, and may or may not work for ordinary QM.
What are you talking about? I’ve only taken one course in quantum field theory, but I’ve never heard of anything where quantum mechanics was not linear. Can you give me a citation? It seems to me that failure of linearity would either be irrelevant (superlinear case, low amplitudes) or so dominant that any linearity would be utterly irrelevant and the Born Probabilities wouldn’t even be a good approximation.
Also, by ‘the Schrodinger equation’ I didn’t mean the special form which is the fixed-particle Hamiltonian with pp/2m kinetic energy—I meant the general form -
i hbar (d/dt) Psi = Hamiltonian Psi
Note that the Dirac Equation is a special case of this general form of the Schrodinger Equation. MWI, ‘naive’ or not, has no trouble with variations in particle number.
I’m not sure what you mean by ‘anthropic per se’. Everett (MW) explains apparent quantum indeterminism anthropically, via indexical ignorance; our knowledge of the system as a whole is complete, but we don’t know where we in the system are at this moment. De Broglie (HV) explains apparent quantum indeterminism via factual ignorance; our knowledge of the system’s physical makeup is incomplete, and that alone creates the appearance of randomness. Von Neumann (OC) explains apparent quantum indeterminism realistically; the world just is indeterministic.
This is either a very implausible answer, or an answer to a different question than the one I asked. Historically, the Born Probabilities are derived directly from experimental data, not from the theorized dynamics. The difficulty of extracting the one from the other, of turning this into a single unified and predictive theory, just is the ‘Measurement’ Problem. Bohm is taking two distinct models and reifying mechanisms for each to produce an all-encompassing theory; maybe that’s useless or premature, but it’s clearly not a non sequitur, because the evidence for a genuine wave/particle dichotomy just is the evidence that makes scientists allow probabilistic digressions from the Schrödinger equation.
MW is not a finished theory until we see how it actually unifies the two, though I agree there are at least interesting and suggestive first steps in that direction. BM’s costs are obvious and clear and formalized, which is its main virtue. Our ability to compare those costs to other theories’ is limited so long as it’s the only finished product under evaluation, because it’s easy to look simple when you choose to only try to explain some of the data.
I see what you mean now about anthropism. Yes, ignorance is subjective. Incidentally, this is how it used to be back before quantum ever came up.
Historically, Born was way before Everett and even longer before decoherence, so that’s not exactly a shocker. Even in Born’s time it was understood that subspaces had only one way of adding up to 1 in a way that respects probability identities—I’d bet dollars to donuts that that was how he got the rule in the first place, rather than doing a freaking curve fit to experimental data. What was missing at the time was any way to figure out what the wavefunction was, between doing its wavefunctiony thing and collapse.
Decoherence explains what collapse is made of. With it around, accepting the claim ‘The Schrödinger Equation is the only rule of dynamics; collapse is illusory and subjective’, which is basically all there is to MWI, requires much less bullet-biting than before it was introduced. There is still some, but those bullets are much chewier for me than any alternate rules of dynamics.
(incidentally, IIRC, Shminux, you hold the above quote but not MWI, which I find utterly baffling—if you want to explain the difference or correct me on your position, go ahead)
Good thing I never said it was.
Well, you still need a host of ideas about how to actually interpret a diagonal density matrix. Because you don’t have Born probabilities as a postulate, you have this structure but no method for connecting it back to lab-measured values.
While it seems straightforward, its because many-world’s advocates are doing slight of hand. They use probabilities to build a theory (because lab experiments appear to be only describable probabilistically), and later they kick away that ladder but they want to keep all the structure that comes with it (density matrices,etc).
I know of many good expositions that start with the probabilities and use that to develop the form of the Schroedinger equation from Galilean relativity and cluster decomposition (Ballentine, parts of Weinberg).
I don’t know any good expositions that go the other way. There are reasons that Deutsch, Wallace,etc have spent so much time trying to develop Born probabilities in a many world’s context- because its an important problem.
Hold on a moment. What ladder is being kicked away here?
We’ve got observed probabilities. They’re the experimental results, the basis of the theory. The theory then explains this in terms of indexical ignorance (thanks, RobbBB). I don’t see a kicked ladder. Not every observed phenomenon needs a special law of nature to make it so.
Instead of specially postulating the Born Probabilities, elevating them to the status of a law of nature, we use it to label parts of the universe in much the same way as we notice, say, hydrogen or iron atoms - ‘oh, look, there’s that thing again’. In this case, it’s the way that sometimes, components of the wavefunction propagate such that different segments won’t be interfering with each other coherently (or in any sane basis, at all).
Also, about density matrices—what’s the problem? We’re still allowed to not know things and have subjective probabilities, even in MWI. Nothing in it suggests otherwise.
That’s just regurgitating the teacher’s password. MWI does not even account for the radioactive decay. In other words, if you find the Schrodinger’s cat dead, how long has it been dead for?
Regurgitating the teacher’s password is a matter of mental process, and you have nowhere near the required level of evidence to make that judgement here.
As for radioactive decay, I’m not clear what you require of MWI here. The un-decayed state has amplitude which gradually diminishes, leaking into other states. When you look in a cat box, you become entangled with it.
If the states resulting from death at different times are distinguishable, then you can go ahead and distinguish them, and there’s your answer (or, if it could be done in principle but we’re not clever enough, then the answer is ‘I don’t know’, but for reasons that don’t really have bearing on the question).
Where it really gets interesting is if the states resulting from cat-death are quantum-identical. Then it’s exactly like asking, in a diffraction-grating experiment, ‘Which slit did the photon go through?‘. The answer is either ‘mu’, or ‘all of them’, depending on your taste in rejecting questions. The final result is the weighted sum of all of the possible times of death, and no one of them is correct.
Note that for this identical case to apply, nothing inside the box gets to be able to tell the time (see note), which pretty much rules out its being an actual cat.
So… If you find Schrödinger’s cat dead, then it will have had a (reasonably) definite time of death, which you can determine only limited by your forensic skills.
~~
Note: The issue is that of cramming time-differentiating states into one final state. The only way you can remove information like that is to put it somewhere else. If you have a common state that the cat falls into from a variety of others, then the radiation from the cat’s decays into this common state encodes this information. It will be lost to entropy, but that just falls under the aegis of ‘we’re not clever enough to get it back out’ again, and isn’t philosophically interesting.
Yeah, sorry, that was uncalled for.
Right. And each of those uncountably many (well, finitely many for a finite cutoff or countably many for a finite box) states corresponds to a different time of death (modulo states with have the same time of death but different emitted particle momenta).
Yes, with all of those states.
They must be, since they result in different macroscopic effects (from the forensic time-of-death measurement).
Yes, but in this case they are not.
Not at all. In the diffraction experiment you don’t distinguish between different paths, you sum over them.
No, you measure the time pretty accurately, so wrong-tme states do not contribute.
Not quite. If the cat does not interact with the rest of the world, the cat is a superposition of all possible decay states. (I am avoiding the objective collapse models here.) It’s pretty actual, except for having to be at near 0 K to avoid leaking information about its states via thermal radiation.
Yes it will. But a different time in different “worlds”. Way too many of them.
The first few responses here boil down to the last response:
Why is it too many? I don’t understand what the problem is here. When you’d collapse the wavefunction, you’re often tossing out 99.9999% of said wavefunction. In MWI or not, that’s roughly splitting the world into 1 million parts and keeping one. The question is the disposition of the others.
Well, yes, because it’s a freaking cat. I had already dealt with the realistic case and was attempting to do something with the other one by explicitly invoking the premise even if it is absurd. The following pair of quote-responses (responding to the lines with ‘diffraction’ and ‘sum of all the possible’) was utterly unnecessary because they were in a conditional ‘if A then B’, and you had denied A.
Of course, one could decline to use a cat and substitute a system which can maintain coherence, in which case the premise is not at all absurd. This was rather what I was getting at, but I’d hoped that your ability to sphere the cow was strong enough to give a cat coherence.
Well, if you are OK with the world branching infinitely many ways every infinitesimally small time interval in every infinitesimally small volume of space, then I guess you can count it as “the disposition”. This is not, however, the way MWI is usually presented.
Spacetime is not saturated with decoherence events.
Inference gap.
Roughly speaking: if you’re working in an interpretation with collapse (whether objective or not), and it’s too early to collapse a wavefunction, then MWI says that all those components you were declining to collapse are still in the same world.
So, since you don’t go around collapsing the wavefunction into infinite variety of outcomes at every event of spacetime, MWI doesn’t call for that much branching.
I don’t understand what “too early to collapse a wavefunction” means and how it is related to decoherence.
For example, suppose we take a freshly prepared atom in an excited state (it is simpler than radioactive decay). QFT says that its state evolves into a state in the Fock space which is a
ground states of the atom+excited states of the EM vacuum (a photon).
I mean “+” here loosely, to denote that it’s a linear combination of the product states with different momenta. The phase space of the photon includes all possible directions of momentum as well as anything else not constrained by the conservation laws. The original excited state of the atom is still there, as well as the original ground state of the EM field, but it’s basically lost in the phase space of all possible states.
Suppose there is also a detector surrounding the atom, which is sensitive to this photon (we’ll include the observer looking at the detector in the detector to avoid the Wigner’s friend discussion). Once the excitation of the field propagates far enough to reach the detector, the total state is evolved into
ground states of the atom + excited states of the detector.
So now the wave function of the original microscopic quantum system has “collapsed”, as far as the detector is concerned. (“decohered” is a better term, with less ontological baggage). I hope this is pretty uncontroversial, except maybe to a Bohmian, to Penrose, or to a proponent of objective collapse, but that’s a separate discussion.
So now we have at least as many worlds/branches as there were states in the Fock space. Some will differ by detection time, others by the photon direction, etc. The only thing limiting the number of branches are various cutoffs, like the detector size.
Am I missing anything here?
That’s right, but it doesn’t add up to what you said about spacetime being saturated with ‘world-branching’ events.
While the decay wave is propagating, for instance, nothing’s decohering. It’s only when it reaches the critically unstable system of the detector that that happens.
There is no single moment like that. if the distance from the atom to the detector is r and we prepare the atom at time 0, the interaction between the atom/field states and the detector states (i.e. decoherence) starts at the time c/r and continues on.
Depends on your framework, but it will actually start even earlier than that in a general QFT. The expectation will be non-zero for all times t. I suppose the physical interpretation is something like a local-fluctuation trips the detector.
Of course, commutators will be non-zero as locality requires.
Right, good point. Still, there are rarely just a few distinct branches in almost any measurement process, it’s a continuum of states, isn’t it?
I see that my short, simple answer didn’t really explain this, so I’ll try the longer version.
Under a collapse interpretation, when is it OK to collapse things and treat them probabilistically? When the quantum phenomena have become entangled with something with enough degrees of freedom that you’re never going to get coherent superposition back out, i.e. once they have decohered. (If you collapse earlier than this, you lose the coherent superpositions: in the double-slit case you get two one-slit patterns added to each other, and that’s all wrong.)
This is also the same criterion for when you consider worlds to diverge in MWI. Therefore, in a two-slit experiment you don’t have two worlds, one for each slit. They’re still one world. Unless of course they got entangled with something messy, in which case that caused a divergence.
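Here is a minimal numerical sketch of the premature-collapse point above (a toy model of my own, in arbitrary units): collapse the slit branches too early and the fringes vanish into two one-slit humps.

```python
import numpy as np

# Toy double-slit: two point sources a distance d apart, screen 1 unit away.
x = np.linspace(-10, 10, 2001)             # position along the screen
k, d = 20.0, 1.0                           # wavenumber and slit separation (arbitrary units)
r1 = np.sqrt(1.0 + (x - d / 2) ** 2)       # path length from slit 1
r2 = np.sqrt(1.0 + (x + d / 2) ** 2)       # path length from slit 2
psi1 = np.exp(1j * k * r1) / np.sqrt(r1)   # amplitude through slit 1
psi2 = np.exp(1j * k * r2) / np.sqrt(r2)   # amplitude through slit 2

one_world = np.abs(psi1 + psi2) ** 2               # still coherent: interference fringes
collapsed = np.abs(psi1) ** 2 + np.abs(psi2) ** 2  # collapsed too early: two one-slit humps
print(one_world.max() / collapsed.max())           # ~2: the constructive fringes overshoot
```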
Now… once it hits the messy thing (for simplicity let’s say it’s the detector), you’re looking at a thermally large number of worlds, and the weights of these worlds are precisely given by the conservation of squared amplitude, a.k.a. the Born Rule.
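And “conservation of squared amplitude” here is just unitarity. A quick sketch, with a random unitary standing in for the messy detector dynamics (the assumption being that the interaction-plus-decoherence step is still globally unitary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                                    # toy Hilbert-space dimension for system + detector
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(A)                    # QR of a random complex matrix yields a unitary

psi = np.zeros(n, complex)
psi[0] = 1.0                              # start in one definite pre-measurement state
weights = np.abs(U @ psi) ** 2            # squared amplitudes of the resulting branches
print(weights.sum())                      # 1.0 up to rounding: total world-weight is conserved
```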
I take it that it bothers you that scattering events producing a thermally large number of worlds is the norm rather than the exception? Quantum mechanics occurs in Fock space, which is unimaginably, ridiculously huge, as I’m sure you’re well aware. The wavefunction is like a gas escaping from a bottle into outer space. And the gas escapes over and over again, because each ‘outer space’ is just another bottle to escape from by scattering.
Or is what’s bugging you that MWI is usually presented as creating less than a thermally large number of worlds? That’s a weakness of common explanations, sure. Examples may replace 10^(mole) with 2 for simplicity’s sake.
I think we are in agreement here that interacting with the detector initially creates a messy entangled object. If one believes Zurek, it then decoheres/relaxes into a superposition of eigenstates through einselection, while bleeding away all other states into the “environment”. Zurek seems to be understandably silent on whether a single eigenstate survives (collapse) or they all do (MWI).
What I was pointing out with the spontaneous emission example is that there are no discrete eigenstates there, thus all possible emission times and directions are on an equal footing. If you are OK with this being described as MWI, I have no problem with that. I have not seen it described this way, however. In fact, I do not recall seeing any treatment of spontaneous emission in the MWI context. I wonder why.
Another, unrelated issue I have not seen addressed by MWI (or objective collapse) is how, in the straight EPR experiment on a singlet with two aligned detectors, one necessarily gets opposite spin measurements, even though each spacelike-separated interaction produces “two worlds”, up and down. Apparently these 2x2 worlds somehow turn into just 2 worlds (updown and downup), with the other two (upup and downdown) magically discarded to preserve angular momentum conservation. But I suppose this is a discussion for another day.
Peculiar. That was one of the first examples I ever encountered. Not the first two, but it was one of the earlier ones. It was emphasized that there is a colossal number of ‘worlds’ coming out of this sort of event, and the two-way splits in the previous examples were just simplest-possible cases.
How can you cut a pizza twice and get only two slices? By running the pizza cutter over the same line again. Same deal here. By applying the same test to the two entangled particles, they get the same results. Or do you mean, how can MWI keep track of the information storage aspects of quantum mechanics? Well, we live in Fock space.
I’d appreciate some links.
I’m lost here again. The two splits happen independently at two spacelike separated points and presumably converge (at the speed of light or slower) and start interacting, somehow resulting in only two worlds at the point where the measurements are compared. If this is a bad model, what is a good one?
My original source was unfortunately a combination of conversations and a book I don’t remember the title of, so I can’t take you back to the original source.
But, I found something here. (†)
The thing is, they’re not truly independent, because the particles were prepared so as to already be entangled—the part of Fock space you put the system (and thus yourself) in is one where the particles are already anti-aligned relative to each other, even though no one particular absolute alignment is preferred. If you entangle yourself with one, then you find you’re already entangled with the other.
It’s just like it works the rest of the time in quantum mechanics, because that’s all that’s going on.
(†) A quick rundown of how prominent this notion is, judging by google results for ‘many worlds’: Wikipedia seemed to ignore quantity. The second hit was HowStuffWorks, which gave an abominable (and obviously pop) treatment. Third was a NOVA interview, and that didn’t give a quantitative answer but stated that the number of worlds was mind-bogglingly large. Fourth was an entry at Plato.stanford.edu, which was quasi-technical while making me cringe about some things, and didn’t as far as I could tell touch on quantity. Fifth was a very nontechnical ‘top 10’-style article which had the huge number of worlds as entries 10, 9, and 8. The sixth and seventh hits were a movie promo and a book review. Eighth was the article I linked above, in preprint form (and so no anchor link, I had to find that somewhere else).
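Back to the physics: here is a toy version of the ‘already entangled’ point (standard textbook singlet algebra, sketched in code of my own). For any shared detector axis, the up-up and down-down joint outcomes carry exactly zero amplitude, so those two ‘worlds’ never had any weight to discard.

```python
import numpy as np

up, dn = np.array([1, 0], complex), np.array([0, 1], complex)
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)  # the prepared EPR pair

def branch_weights(theta):
    """Weights of the four joint outcomes, with both detectors along the same axis theta."""
    plus = np.array([np.cos(theta / 2), np.sin(theta / 2)], complex)    # spin-up along axis
    minus = np.array([-np.sin(theta / 2), np.cos(theta / 2)], complex)  # spin-down along axis
    return {
        (a, b): abs(np.vdot(np.kron(va, vb), singlet)) ** 2
        for a, va in (("up", plus), ("down", minus))
        for b, vb in (("up", plus), ("down", minus))
    }

print(branch_weights(0.7))  # up-down and down-up each get weight 1/2; up-up and down-down get 0
```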
Right, the two macroscopic systems are entangled once both interact with the singlet, but this is a non-local statement which acts as a curiosity stopper, since it does not provide any local mechanism for the apparent “action at a distance”. Presumably MWI would offer something better than shut-up-and-calculate, like showing how the pair of worlds seen locally at each detector propagate toward each other, interact, and become just two worlds at the point where the results are compared, thanks to the original correlations present when the singlet was initially prepared. Do you know of anything like that written up anywhere?
Part 1 - to your first sentence: If you accept quantum mechanics as the one fundamental law, then state information is already nonlocal. Only interactions are local. So, the way you resolve the apparent ‘action at a distance’ isn’t to deny that it’s nonlocal, but to deny that it’s an action. To be clearer:
Some events transpire locally, that determine which (nonlocal) world you are in. What happened at that other location? Nothing.
Part 2 - Same as the last link, question 32, with one exception: I would say that |me(L)> and such, being macrostates, do not represent single worlds but thermodynamically large bundles of worlds that share certain common features. I have sent an email suggesting this change (but considering the lack of edits over the last 18 years, I’m not confident that it will happen).
To summarize: just forget about MWI and use conventional quantum mechanics + macrostates. The entanglement is infectious, so each world ends up with an appropriate pair of measurements.
Thanks! It looks like the reference equates the number of worlds with the number of microstates, since it calculates it as exp(S/k), not as the number of eigenstates of some interaction Hamiltonian, which is the standard lore. From this point of view, it is not clear how many worlds you get in, say, a single-particle Stern-Gerlach experiment: 2 or exponent of the entropy change of the detector after it’s triggered. Of course, one can say that we can coarse-grain them the usual way we construct macrostates from microstates, but then why introduce many worlds instead of simply doing quantum stat mech or even classical thermodynamics?
Anyway, I could not find this essential point (how many worlds?) in the QM sequence, but maybe I missed it. All I remember is the worlds of different “thickness”, which is sort of like coarse-graining microstates into macrostates, I suppose.
It is coarse-graining them into macrostates. Each macrostate is a bundle of thermodynamically many, effectively mutually independent worlds.
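For a sense of what “thermodynamically many” means here, a back-of-envelope exp(S/k), with numbers chosen purely for illustration:

```python
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
E, T = 1e-9, 300.0     # hypothetical: one nanojoule dissipated in a room-temperature detector
ln_worlds = (E / T) / k_B   # S/k = ln(number of microstates) for this entropy increase
print(f"exp(S/k) = e^{ln_worlds:.3g}")  # e^(2.41e11) -- nothing like the naive '2' of a binary split
```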
On the contrary, I’ve found that MWI is “usually presented” as continuous branching happening continuously over time and space. And (the argument goes) you can’t argue against it on the grounds of parsimony any more than you can argue against atoms or stars on the grounds of parsimony. (There are other valid criticisms, to be sure, but breaking parsimony is not one of them.)
Any links?
Indeed, the underlying equations are the same whether you aesthetically prefer MWI or not.
Sure. Here’s one. LW’s own quantum physics sequence discusses systems undergoing continuously branching evolution. Even non-MWI books are fairly explicit in pointing out that the wavefunction is continuous but that we’ll study discrete examples to get a feel for things (IIRC).
In fact, I don’t think I’ve ever seen an MWI claim outside of scifi that postulates discrete worlds. I concede that some of the wording in layman explanations might be confusing, but even simplifications like “all worlds exist” or “all quantum possibilities are taken” imply continuous branching.
It seems to me like continuous branching is the default, not the exception. Do you have any non-fiction examples of MWI being presented as a theory with discretely branching worlds?
Precisely. It’s also not a trivial connection. The way the interaction between the wavefunction and the particles produces the Born probabilities is subtle and interesting (see MrMind’s comment below on some of the subtleties involved).
The main problem with Bohmian mechanics, from my perspective, is not that it is non-local per se (after all, the lesson of Bell’s theorem is that all interpretations of QM will be non-local in some sense), but that its particular brand of egregious non-locality makes it very difficult to come up with a relativistic version of the theory. I have seen some attempts at developing a Bohmian quantum field theory, but they have been pretty crude (relying on undetectable preferred foliations, for instance, which I consider anathema). I haven’t been keeping track, though, so maybe the state of play has changed.
Interesting; I did a quick google search and apparently there’s a guy who claims he can do it without foliations: iopscience.iop.org/1742-6596/67/1/012035/pdf/jpconf7_67_012035.pdf
I lack the expertise to make a more detailed analysis of it though.
No love for the principle of relativity? It’s been real successful, and nonlocality means choosing a preferred reference frame. Even if the effects are non-observable, that implies immense contortions to jump through the hoops set by SR and GR, and reality being elegant seems to have worked so far. And sure, MWI may trample all over human uniqueness, but invoking human uniqueness didn’t lead to the great cosmological breakthroughs of the 20th century.
The thing that bugs me about DBB theory is that it allows superluminal communication when the guide wave is out of equilibrium...
But since it’s superdeterministic, it seems unlikely that you could actually set up an artificial nonequilibrium situation.
Yes, the feeling I have is that of uneasiness, not rejection. But still, DBB can be put in agreement with relativity only through the proper initial conditions, which I see as a defect (although not an obviously fatal one).