Demystifying Born’s rule
Under the many-worlds interpretation (MWI) of quantum mechanics, Born probabilities are a bit mysterious and hard to work with. How do you do Solomonoff induction when you keep splitting? There are ways to do this, but they end up equivalent to another interpretation of quantum mechanics that is much simpler: the pilot wave theory. Not only does it include an interpretation of Born probabilities; Born's rule becomes a consequence of statistical mechanics. We also no longer need to adapt our epistemology to QM, because the pilot wave theory is deterministic. All our uncertainty comes from the initial conditions, just as in Newtonian physics, and much of the weirdness of QM reduces to the weirdness of anthropics in general. The pilot wave theory is an interpretation, meaning that choosing it is a philosophical choice rather than a scientific one.
The ontology includes two components: a wave function (like in many-worlds) and a certain configuration of particles. The configuration is called the “actual” configuration (although the wave function and this configuration are both necessary components). The wave function follows Schrödinger’s equation and the configuration follows the guiding equation.
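Concretely, for the non-relativistic, spinless case (a sketch in standard notation, with N particles at positions Q₁, …, Q_N), the two equations are:

```latex
i\hbar\,\frac{\partial \Psi}{\partial t} = \hat H \Psi,
\qquad\qquad
\frac{dQ_k}{dt} = \frac{\hbar}{m_k}\,
\operatorname{Im}\!\left(\frac{\nabla_k \Psi}{\Psi}\right)
\Bigg|_{\,(Q_1(t),\,\dots,\,Q_N(t))}.
```

The wave function evolves exactly as in MWI; the guiding equation on the right is the only extra dynamical law, and it is evaluated at the actual configuration.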
So far, this might seem more complicated than MWI. MWI already has the wave function (it is the same wave function in MWI and pilot wave theory), so why bother with the "actual configuration"? But MWI also needs to postulate Born probabilities, and if you want to actually do inference you need to figure out how this interacts with embedded agency (good luck figuring out how embedded agency applies to Wigner's friend in MWI). To actually interpret MWI, you end up needing to select a configuration to feed into your inference procedure, and this will be roughly as complicated as the pilot wave theory anyway, except that you still needed to assume Born's rule as an axiom. In the pilot wave theory, you just select observations from the "actual" configuration.
So how does Born's rule get demystified? Given a small amount of uncertainty in the initial conditions, the ensemble quickly converges to one in which the configuration of particles is distributed according to Born's rule applied to the wave function. This is called relaxation to equilibrium. So Born's rule is a law in the same sense as the second law of thermodynamics. Also see the SEP entry.
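A minimal numerical sketch of the time-independent core of this story, equivariance: particles that start |ψ|²-distributed stay |ψ|²-distributed under the guiding equation. This is a toy model of my own construction (a superposition of the two lowest harmonic-oscillator eigenstates, in units ℏ = m = ω = 1); the closed-form velocity below follows from the guiding equation for that wave function.

```python
import numpy as np

# psi(x, t) = (phi0(x) e^{-i t/2} + phi1(x) e^{-3 i t/2}) / sqrt(2),
# with phi0, phi1 the two lowest harmonic-oscillator eigenstates.
# Then |psi|^2 and the Bohmian velocity v = Im(d_x psi / psi) have
# closed forms; note 1 + 2x^2 + 2*sqrt(2)*x*cos(t) = (sqrt(2)x + cos t)^2 + sin^2 t.

def density(x, t):
    """|psi(x, t)|^2."""
    return np.exp(-x**2) / np.sqrt(np.pi) \
        * (1 + 2 * x**2 + 2 * np.sqrt(2) * x * np.cos(t)) / 2

def velocity(x, t):
    """Guiding-equation velocity for this wave function."""
    return -np.sqrt(2) * np.sin(t) \
        / (1 + 2 * x**2 + 2 * np.sqrt(2) * x * np.cos(t))

rng = np.random.default_rng(0)

# Sample initial positions from |psi(x, 0)|^2 on a fine grid.
grid = np.linspace(-6, 6, 4001)
p = density(grid, 0.0)
p /= p.sum()
x = rng.choice(grid, size=20000, p=p)

# Integrate the guiding equation with small Euler steps up to t = 2.
dt = 0.001
for step in range(2000):
    x = x + velocity(x, step * dt) * dt

# Equivariance: the ensemble mean should track the Born-rule
# expectation <x>(t) = cos(t) / sqrt(2).
print(x.mean(), np.cos(2.0) / np.sqrt(2))
```

The ensemble is in equilibrium from the start here; relaxation proper is the stronger claim that a coarse-grained non-equilibrium ensemble approaches this distribution, which needs a less trivial wave function to demonstrate.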
Some other points:
Quantum immortality? It depends on whether you consider the copies of yourself in the wave function alive (biologically no, but if you consider things isomorphic to living things to be alive, then yes). And if you're a Platonist, you still get hypothetical immortality, where you get to be alive in hypothetical scenarios. You can also consider multiple "actual configurations" that are compatible with the same wave function.
Collapse? If a subsystem obeys Born's rule and does not interact with the environment, it will act like its own mini-universe. But if the environment observes information about the configuration of particles in the subsystem, we can no longer do this, and we must view it as part of the rest of the universe again. This is called decoherence, which, when sharp enough, is what we perceive as collapse (which is never perfect). For example, observing the photon in the double-slit experiment results in macroscopic differences in the wave function that prevent it from interfering with itself, which in turn means that the actual configuration of the photon follows a different trajectory. The parts of the wave function that do not match the actual configuration macroscopically have a negligible effect on the trajectory.
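The "mini-universe" idea can be made precise via what the Bohmian literature calls the conditional wave function (a sketch; notation mine):

```latex
\psi_t(x) \;=\; \Psi_t\bigl(x,\, Y(t)\bigr),
```

where x ranges over the subsystem's configurations, Y(t) is the actual configuration of the environment, and Ψ_t is the universal wave function. The subsystem's particles are guided by ψ_t, and when decoherence makes the environment-dependent branches of Ψ_t stop overlapping, ψ_t evolves by its own Schrödinger equation, which is exactly the effective collapse just described.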
Although we have no way of testing this currently, there is a way the pilot wave theory could diverge from QM empirically: if we could somehow learn more information about the configuration of particles than Born's rule allows, this would have measurable effects. Or, from the perspective of the pilot wave theory, Born's rule is itself empirical (again, in the same sense as the second law of thermodynamics).
Why isn't it more popular? Well, for physicists, the Copenhagen interpretation or MWI (depending on context) suffices. The Copenhagen interpretation works well enough for experiments. MWI works fine if you are describing the universe as a whole (and do not care how your observations fit into it). Only if you need embedded agency does the pilot wave theory become simpler than MWI. Physicists also do not like that the pilot wave theory is non-local.
Is the pilot wave theory the best interpretation? I do not know. Randomly sampling from the ensemble in the ensemble interpretation also looks promising. Perhaps QBism as well? (See this table for other options.) The point of this post is to argue that the pilot wave theory is sufficient to demystify Born's rule; I am not ruling out that there are even better interpretations. In particular, the fact that the two most popular interpretations can't demystify Born's rule is specific to those interpretations, not to QM.
A more severe problem is that it is not relativistic. More precisely, as John Bell said of a stochastic version, "As with relativity before Einstein, there is a preferred frame in the formulation of the theory, but it is experimentally indistinguishable". You need some notion of absolute simultaneity in order to write down the equations of motion.
Bohmian mechanics (to use another name for this dynamical framework) has analogous problems with some other symmetries. For an example concerning “lapse” and “shift” functions in general relativity, see this old paper. The situation is at least as bad in gauge theories, such as those that describe the strong and electroweak forces.
There is no ontological interpretation of quantum theory, known to me, that is free of problems. The simplest attitude to have is that quantum mechanics works empirically, that it is ontologically incomplete, and that we don’t know the true ontology.
There are relativistic extensions, just as there are for the other interpretations.
It would be tiresome to go through all the proposals listed in that article, identifying how they work and their resultant limitations, and then speculating whether they might nonetheless help us understand reality one day. Or at least, I think I would save that kind of analytical effort for an audience of physicists interested in ontology.
But let me just discuss one example. The proposal discussed in the most detail is due to Dürr et al. The article outright says that they rely on the existence of a "preferred foliation" of spacetime. That means that to define their "relativistic" Bohmian mechanics, they still need a particular decomposition of spacetime into a stack of spacelike hypersurfaces. Their trick is to then say, well, we don't need to use a coordinate system in which those hypersurfaces are all "t = constant". We can do a relativistic boost, and switch to a new coordinate system with a new time coordinate, in which those hypersurfaces are tilted with respect to the time axis.
That’s formally true, but nonetheless, their pilot wave and their equations of motion are only defined with respect to one specific foliation, which de facto defines a notion of absolute simultaneity.
The point is that all these "extensions to relativity" involve some kind of trick, or they only work in an artificially narrow setting, or they meander off in some eccentric direction. There is certainly no relativistic Bohmian mechanics known that can deal with all the known phenomena of field theory, like pair creation and gauge invariance.
I have a background in physics, and I don't like pilot wave theory, because the particle configuration is completely epiphenomenal. And by the way, I also don't like the Copenhagen interpretation, because it's not even a theory.
Also, last I heard, they had not figured out how to handle multiple particles, let alone field theory. But that was almost a decade ago, so there has probably been some progress.
Regarding explaining Born's rule: you have a point that many worlds leaves something to be explained here. On the other hand, there is no alternative: no other choice preserves probability over time.
On the other hand, pragmatically speaking, pilot wave theory gives the same predictions as the other QM interpretations. So it's probably fine to use this interpretation if it simplifies other things.
The Copenhagen interpretation isn't a theory-as-opposed-to-an-interpretation... and doesn't claim to be. Although you could complain it isn't saying much as an interpretation either.
Bohmian mechanics has the opposite problem: it's definitely a theory, because it has additional mathematical structure, and it definitely has an ontology. But for all the additional complexity, it struggles to predict the full set of results. It doesn't reproduce everything that standard QM can do, and it isn't simpler, so there is no pragmatic case for using it. But there is such a pragmatic case for using CI, interpreted correctly as the minimum set of assumptions necessary to get the results, not incorrectly as a synonym for objective collapse.
Which probability? MWI preserves objective probability, but MWIers still need to disregard unobserved measurements in order to get the right subjective probabilities.
I admit that I did not word that very well. I honestly don't know how to concisely express how much the Copenhagen interpretation makes no sense at all, not even as an interpretation.
You could express it non-concisely.
Not without spending more time than I want on this. Sorry.
IMO the awkwardness of introducing an unnecessary new element into the theory's ontology, plus the fact that this new element interacts non-locally with the wave function, means that MWI is still simpler / more philosophically satisfying overall. I'm also not convinced that pilot-wave theory actually makes embedded agency any simpler.
There is a beautiful way to simulate pilot wave behavior experimentally with “walking” oil drops.
It is possible to derive Born's rule from very general conditions that would apply to almost any version of QM, e.g.:
https://arxiv.org/pdf/2006.14175.pdf
(NB, that’s Born’s rule, not measurement, collapse, decoherence, etc).
Last I heard, there was no pilot-wave version of the standard model of particle physics. Also, last I heard, the (apparent) exact local Lorentz-invariance of the universe is either outright violated by pilot-wave theories or “put in by hand” in a way that makes it seem like a massive coincidence / fine-tuning.
I’m actually not very knowledgeable on this; if those allegations above were ever true, are they still true right now?
Separately, I disagree that MWI creates mysteries about embedded agency or anything else. You do need an "indexical" postulate of some sort (the probability that "I will find myself" in such-and-such branch), and the Born rule supplies that, but I don't see that as hard to swallow. Also, I believe the Born rule turns out to be equivalent to seemingly-weaker indexical assumptions like "as quantum amplitude approaches zero, the probability that you'll find yourself in that branch approaches zero too" (cf. here). I don't think we can get rid of indexical assumptions—even in a deterministic universe, we still have to deal with Parfit's teletransporter and such. If we're OK with Parfit's teletransporter, I don't think there's additional weirdness in the MWI indexical assumption. (I'm stating these opinions without justifying them.)
Born's rule alone doesn't suffice, I don't think? What happens if the branch "you" are in gets cancelled by another branch? It's not clear to me how you're supposed to do inference with just Born's rule. See also: https://www.lesswrong.com/posts/7A9rsJFLFqjpuxFy5/i-m-still-mystified-by-the-born-rule#Q1__What_hypothesis_is_QM_
One doesn't invoke the term "different branches" unless they are macroscopically different, and if they're ever macroscopically different, then they will remain macroscopically different forever, thanks to entropy (and the related fact that macroscopic events leave countless little persistent traces in the environment). Even more so if we're talking about human observers, who form memories of what they've seen in the form of changes to the structure of their brains. Macroscopically different branches can't "cancel", and more generally macroscopically different branches can't interfere in a way that has any measurable effect.
(For any quantum observable O that’s relevant in practice, ⟨ψ|O|ϕ⟩≈0 if |ψ⟩ and |ϕ⟩ are macroscopically different—e.g. the geiger counter loudly clicked in |ψ⟩ but not |ϕ⟩.)
Solomonoff inductors are a bit of an odd case because they don’t & can’t exist in the universe. But leaving that aside (let’s say we’re implementing AIXItl or whatever):
Every time the computer makes an observation, we learn some stuff about the universe, and we also learn some indexical information about where we-in-particular are sitting within the universe. This has always been true, but it's especially true of MWI, because we will never stop getting indexical updates (unlike a deterministic universe where you can learn that you're in a particular room on earth and then there's no more indexical information to learn). In MWI, if we observe that a pixel is bright, then we have learned that we are in a branch of the wavefunction wherein the pixel is bright. There might or might not be other branches wherein the pixel is dark, but if there are, we now know that those branches are "not where I have found myself", and we can ignore those branches accordingly. You can still have hypotheses, but they will incorporate Born-rule indexical uncertainty about which branch you will find yourself in in the future, on top of whatever other indexical uncertainty you have for other reasons.
Ah, but that’s the crux of the issue. They can. How should Wigner’s friend be performing inference?
Scott's analysis seems fine to me, unless I missed something. He writes "Many-Worlders will yawn at this question" [in reference to Wigner's friend]. Yes. I yawn. If Wigner is right outside the lab door, then Wigner is in fact in one of the branches (the same branch as his friend) even if he happens to not yet know which one. If Wigner is on Alpha Centauri, then he is not yet in one of those two branches, and his friend is, and I don't see any problem with that. And then a few years later he gets a message from his friend, and by that point Wigner is in one of those two branches, and when he reads the message he'll know which one.
I’m reluctant to engage with extraordinarily contrived scenarios in which magical 2nd-law-of-thermodynamics-violating contraptions cause “branches” to interfere. But if we are going to engage with those scenarios anyway, then we should never have referred to them as “branches” in the first place, and also we should be extremely wary of applying normal intuitions in situations where the magical contraption is “scrambling people’s brains” as Scott puts it.
As a meta point, I might drop out of this conversation at any point (including maybe right now), gotta get back to work. :)
Agreed. Roland Omnes tries to calculate how big the measurement apparatus of Wigner needs to be in order to measure his friend and gets 10^(10^18) degrees of freedom ("The Interpretation of Quantum Mechanics", section 7.8).
Well, that’s one of the problems of the MWI: how do we know when we should speak of branches? Decoherence works very well for all practical purposes but it is a continuous process so there isn’t a point in time where a single branch actually splits into two. How can we claim ontology here?
I don’t think it’s a problem—see discussion here & maybe also this one.
Thanks, I see we already had a similar argument in the past.
I think there’s a bit of motte and bailey going on with the MWI. The controversy and philosophical questions are about multiple branches / worlds / versions of persons being ontological units. When we try to make things rigorous, only the wave function of the universe remains as a coherent ontological concept. But if we don’t have a clear way from the latter to the former, we can’t really say clear things about the parts which are philosophically interesting.
So much the worse for the controversy and philosophical questions. If anything, the name is the problem. People get wrong ideas from it, and so I prefer to talk in terms of decoherence rather than “many worlds”. There’s only one world, it’s just more complex than it appears and decoherence gives part of an explanation for why it appears simpler than it is.
Unfortunately, what I would call the bailey is quite common on Lesswrong. It doesn’t take much digging to find quotes like this in the Sequences and beyond:
That can only happen if the “branches” or “worlds” are still in a coherent superposition.
There is an approach to MWI based on coherent superpositions, and a version based on decoherence. These are (for all practical purposes) incompatible opposites, but are treated as interchangeable in Yudkowsky’s writings.
The original, Everettian, coherence-based approach is minimal, but fails to predict classical observations. The later decoherence-based approach is more empirically adequate, but seems to require additional structure, placing its simplicity in doubt.
Coherent superpositions probably exist, but their components aren’t worlds in any intuitive sense. Decoherent branches would be worlds in the intuitive sense, and while there is evidence of decoherence, there is no evidence of decoherent branching, as opposed to decoherence.
If I understand it correctly (I may not!), Scott Aaronson argues that hidden-variable theories (such as Bohmian / pilot wave) imply hypercomputation (which should count as evidence against them): https://www.scottaaronson.com/papers/npcomplete.pdf
If hypercomputation is defined as computing the uncomputable, then that’s not his idea. It’s just a quantum speedup better than the usual quantum speedup (defining a quantum complexity class DQP that is a little bigger than BQP). Also, Scott’s Bohmian speedup requires access to what the hidden variables were doing at arbitrary times. But in Bohmian mechanics, measuring an observable perturbs complementary observables (i.e. observables that are in some kind of “uncertainty relation” to the first) in exactly the same way as in ordinary quantum mechanics.
There is a way (in both Bohmian mechanics and standard quantum mechanics) to get at this kind of trajectory information, without overly perturbing the system evolution—“weak measurements”. But weak measurements only provide weak information about the measured observable—that’s the price of not violating the uncertainty principle. A weak measuring device is correlated with the physical property it is measuring, but only weakly.
I mention this because someone ought to see how it affects Scott’s Bohmian speedup, if you get the history information using weak measurements. (Also because weak measurements may have an obscure yet fundamental relationship to Bohmian mechanics.) Is the resulting complexity class DQP, BQP, P, something else? I do not know.
Let x and y denote configurations. Let f(x) be a number (that depends on x); its expectation is 𝔼ₓf(x). Let g(x,y) be a number; if x and y are independent, its expectation is 𝔼ₓ𝔼ᵧg(x,y). I suspect that the mysterious square in Born's rule is analogous to the one that falls out here when we condition on x = y, or when the other terms cancel out.
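A toy numerical illustration of the expectation identities being gestured at (the choice f = cos and all names are mine): for independent x and y with g(x,y) = f(x)f(y), the expectation factorizes into the square of a single expectation, while conditioning on x = y gives a genuinely different number.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)  # independent of x

def f(z):
    return np.cos(z)  # an arbitrary test function

# Independent case: E[f(x) f(y)] = E[f(x)] * E[f(y)] = (E f)^2.
independent = (f(x) * f(y)).mean()
factored = f(x).mean() * f(y).mean()

# Conditioning on x = y instead gives E[f(x)^2], a different quantity.
conditioned = (f(x) ** 2).mean()

print(independent, factored, conditioned)
```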