Like another Dave almost three years ago, I think this post was the most effective so far. Not as in ‘constructed better’, because I suspect that almost everything in previous posts in the QM series and quite a lot in posts elsewhere was building up to this.
I’d been getting used to thinking in terms of sensors being entangled with the particles they sense, etc., but references to humans being entangled too seemed to be somewhere between obvious and avoiding the issue: I didn’t feel what that meant. In this thread I’d got to the bottom and was wondering why we were talking about a physically infinite universe when the message from halfway through the post finally internalised.
Whether I’ll be persuaded of interpretations of QM is unclear, as I have little maths and less physics, so I feel hideously under-qualified to judge based on one side of the argument, given that it’s perfectly plausible that the counter-argument relies on tools that I don’t have available. But in terms of the aim of making QM seem reasonable and non-mysterious, this is doing astonishingly well. Given that at a certain level I found the mysteriousness quite reassuring, that’s a particularly tough job.
Let me join all those observing that these are great explanations of QM. But I don’t get why we need to invoke MWI and the Ebborians. If the wavefunction evolves into
(Human-LEFT Sensor-LEFT Atom-LEFT) + (Human-RIGHT Sensor-RIGHT Atom-RIGHT)
but we only observe
(Human-LEFT Sensor-LEFT Atom-LEFT)
then it makes far more sense to me that, rather than conjuring up a completely unobservable universe with clones of ourselves where (Human-RIGHT Sensor-RIGHT Atom-RIGHT) happened, a far more empirical explanation is that it simply didn’t happen. Half of the wavefunction disappears, nondeterministically. Why, as Occam might say, multiply trees beyond necessity? Prune them instead. Multiple “worlds” strike me as no more necessary than the aether or absolute space.
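To make the two readings concrete, here is a minimal sketch in Python/NumPy (my own toy model, not anything from the post): it builds the quoted two-branch state for a toy two-state human, sensor and atom, and then implements the proposed “pruning” as a projection plus renormalization, which is exactly the extra step a collapse postulate adds on top of bare unitary evolution.

```python
import numpy as np

# Toy two-state basis for each of human, sensor and atom (8 amplitudes total).
LEFT, RIGHT = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def tensor(*vs):
    out = vs[0]
    for v in vs[1:]:
        out = np.kron(out, v)
    return out

# (Human-LEFT Sensor-LEFT Atom-LEFT) + (Human-RIGHT Sensor-RIGHT Atom-RIGHT), normalized.
psi = (tensor(LEFT, LEFT, LEFT) + tensor(RIGHT, RIGHT, RIGHT)) / np.sqrt(2)

# "Pruning": project onto the Human-LEFT subspace and renormalize.  This is the
# nondeterministic disappearance the comment proposes; it has to be added on
# top of unitary evolution, it does not follow from it.
P_left = np.kron(np.outer(LEFT, LEFT), np.eye(4))   # projector onto Human-LEFT
pruned = P_left @ psi
prob_left = np.vdot(pruned, pruned).real            # Born weight of this branch: 0.5
pruned = pruned / np.sqrt(prob_left)                # renormalized single-branch state
print(prob_left)                                    # 0.5
```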
This is addressed in Decoherence is Simple.
(Also, the tag doesn’t work because Less Wrong uses Markdown formatting for comments; if you click “Help” under the comment box you can see a reference to some of the more common constructions.)
(Also, welcome to Less Wrong!)
I think the obvious reply here is ‘keep reading to the end of the Sequence’! After all, quite a lot of space is devoted to looking at different models.
On the Occam’s razor point, the question is what we’re endeavouring to make simple in our theories. Eliezer’s argument is that multiple worlds require no additions to the length of the theory if it was formally expressed, whereas a ‘deleting worlds’ function is additional. It’s also unclear where it would kick in, what ‘counts’ as a sufficiently fixed function to chop off the other bit. It’s not clear from your post if you think the other half’s chopped off because we haven’t observed it, or we don’t observe it because it’s chopped off!
The other point is that if we are ‘Human-LEFT’ then we don’t expect the other part of the wave function to be observable to us. Does that mean we delete it from what is real? The post addressing that question in a context divorced from QM:
http://lesswrong.com/lw/pb/belief_in_the_implied_invisible/
Is there a formal expression of the theory of measurement (in a universally agreed upon language) where this can be demonstrated?
I’m fairly certain the answer is “no”.
Basically, what the MWI believer wants to argue is that in a hypothetical universe where we had no hidden variables and no collapses—nothing but unitary evolution under the Schrödinger equation—observers would still have experiences where it ‘seems as if’ there is only one universe, and it ‘seems as if’ their measurement outcomes are probabilistic as described by the Born rule.
On the other hand, the MWI skeptic denies that the formal description of the theory suffices to determine “how things would seem if it were true” without extra mathematical machinery.
Unfortunately, the extra machinery that people propose, in order to bridge the gap between theory and observation, tends to be some combination of complicated, arbitrary-seeming, ugly and inadequate (e.g. the Many Minds of Albert and Loewer, or the various attempts to reduce Born probabilities to ‘counting probabilities’ by De Witt and others). This leaves some people pining for the relative simplicity and elegance of Bohm’s theory.
(Bohm’s theory is precisely “QM without collapse + some extra machinery to account for observations”. The only differences are that (a) the extra machinery describes a ‘single universe’ rather than a multiverse and (b) it doesn’t pretend to be the inevitable, a priori ‘unpacking’ of empirical predictions which are already implicit in “QM without collapse”.)
I’m not sure that’s actually in dispute, or that it should be. MWI and the Copenhagen interpretation do produce identical pictures for a person inside. The physics is identical—what’s really different is the ontology. The Bohm theory produces the same results in the non-relativistic picture but apparently has some unresolved problems with going relativistic. And since relativity definitely is true, that’s a problem.
(a) The Copenhagen Interpretation is incoherent, and for that reason it’s obviously wrong. I wish everyone would just agree never to talk about it again.
(b) It’s a delicate philosophical question whether and how far a formal mathematical theory produces a “picture for the person inside”.
(c) MWI can mean several different things. If, temporarily, we take it to mean “QM without collapses or hidden variables + whatever subjective consequences follow from that” then you have somewhere between a lot of work and an impossible task to deduce the Born probabilities.
(d) I’m not a Bohmian. I think ultimately we will attain a satisfactory understanding of how the Born probabilities are, after all, ‘deducible’ from the rest of QM. Some promising directions include Zurek’s work on decoherence and einselection, Hanson’s notion of “Mangled Worlds”, and cousin_it’s ideas about how to ‘reduce’ the notion of probability to other things.
(a) By the Copenhagen interpretation, do you mean what I meant, i.e. the status-quo interpretation used in most of physics? Would you please explain how it’s obviously incoherent?
(b, c, and some discouragement for d) The delicacy is more a property of philosophers than the universe. If people are built out of ordinary matter, then in thought experiments (e.g. Schrödinger’s cat, but with people) we can swap them for any other system with the right number of possible states (one state per possible outcome). Since we don’t ever subjectively experience being in a superposition, it’s fairly obvious that you have to get rid of the resulting superposition if you ask what a person subjectively experiences. To do that you want to do a thing called tracing out the environment (Sorry, no good page on this, but it’s this operation) that in short changes entanglement to probabilistic correlation. 3 guesses what the probabilities are (though this really just moves the information about the Born probabilities from an explicit rule to the rules of density matrices, so it’s not really “deducible”).
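For readers who want the operation spelled out, here is a minimal sketch of tracing out the environment, assuming a toy entangled state like the one discussed above (the 0.8/0.6 amplitudes are arbitrary choices of mine): entanglement becomes probabilistic correlation, and the diagonal of the reduced density matrix is exactly the Born probabilities.

```python
import numpy as np

LEFT, RIGHT = np.array([1.0, 0.0]), np.array([0.0, 1.0])
a, b = 0.8, 0.6   # amplitudes; the Born probabilities will be 0.64 and 0.36
psi = a * np.kron(LEFT, np.kron(LEFT, LEFT)) + b * np.kron(RIGHT, np.kron(RIGHT, RIGHT))

rho = np.outer(psi, psi.conj())              # pure-state density matrix of everything
rho = rho.reshape(2, 4, 2, 4)                # (human, environment, human', environment')
rho_human = np.trace(rho, axis1=1, axis2=3)  # partial trace over the environment

# The reduced state is a mixed state: no off-diagonal coherence is left, and
# the diagonal entries are the Born probabilities.
print(np.round(rho_human, 3))                # diag(0.64, 0.36)
```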
If the wavefunction collapses upon measurement, and no adequate definition of the term “measurement” has been given, then the theory as it stands is incoherent. (I realize that a Copenhagenist thinks that they can get around this by simply denying that the wavefunction exists, but the price of that move is that they don’t have any coherent picture of reality underneath the mathematics.)
Since we don’t ever subjectively experience being in a superposition
OK, but let’s note in passing that the MWI believer needs to show this a priori, whether from the mathematics of QM or by deconstructing the concept of “experiencing a superposition” or both. I don’t think that should be particularly difficult, though.
To do that you want to do a thing called tracing out the environment
I know what that means.
The trouble is at some point you need to explain why our measurement records show sequences of random outcomes distributed according to the Born rules. Now it may be the case that branches where measurement records show significant deviation from the Born rules have ‘low amplitude’, but then you need to explain why we don’t experience ourselves in “low amplitude branches”. More precisely, you need to explain why we seem to experience ourselves in branches at rates proportional to the norm squared of the amplitudes of the branches (whatever this talk of ‘experiencing ourselves’ is supposed to mean, and whatever a ‘branch’ is supposed to be). Why should that be true? Why shouldn’t the subjective probabilities simply be a matter of ‘counting up’ branches irrespective of their weighting? After all, the wavefunction still contains all of the information even about ‘lightly weighted’ branches.
The MWI believer thinks that, by talking about reduced density matrices obtained by ‘tracing out’ the environment, you’ve thereby made good progress towards showing where the Born probabilities come from. But the MWI skeptic thinks that the ‘last little bit’ that you still have to do (i.e. explaining why we experience ourselves in heavily weighted branches more than lightly weighted ones) is and always was the entire problem.
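One way to see the gap being pointed at is a toy numerical comparison (my own construction, with arbitrary amplitudes): if every outcome record counts as one “branch”, the expected fraction of LEFT outcomes is 0.5, whereas weighting records by the Born rule gives 0.64, which is what our actual records look like. The code only restates the question of which weighting to use; it does not answer it.

```python
import itertools
import numpy as np

N, p = 12, 0.64   # N measurements of a qubit with Born probability p for LEFT
records = list(itertools.product([0, 1], repeat=N))   # all 2**N measurement records

def born_weight(record):
    n_left = record.count(0)
    return p**n_left * (1 - p)**(N - n_left)

# Branch counting: every record is one branch, so the expected LEFT fraction is 0.5.
count_avg = np.mean([r.count(0) / N for r in records])
# Born weighting: records with roughly 64% LEFT outcomes carry almost all the weight.
born_avg = sum(born_weight(r) * r.count(0) / N for r in records)
print(count_avg, born_avg)   # ~0.5 vs ~0.64
```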
no adequate definition of the term “measurement” has been given
Not even the quantum-information-ey definition of “transfer of information from the measuree to the measurer?”
I realize that a Copenhagenist thinks that they can get around this by simply denying that the wavefunction exists
What I’m taking from this is that you don’t know many copenhagenists familiar with Bell’s theorem.
The trouble is at some point you need to explain why our measurement records show sequences of random outcomes distributed according to the Born rules.
Well I suppose I could tell the MWI (or any sort of) skeptic about what density matrices and mixed states are (other readers: to wikipedia!), and how when you see a mixed state it is by definition probabilistic.
But the MWI skeptic thinks that the ‘last little bit’ that you still have to do (i.e. explaining why we experience ourselves in heavily weighted branches more than lightly weighted ones) is and always was the entire problem.
Well, even after the next question of “why do density matrices work that way?”, you can always ask “why?” one more time. But eventually we, having finite information, will always end with something like “because it works.” So how can we judge explanations? Well, one “why” deeper is good enough for me.
3 guesses what the probabilities are
Sorry, I don’t know what you’re talking about.
Tsk, fine, 0 guesses then: the probabilities you get from tracing out the environment are the Born probabilities. But as I said this doesn’t count as deducing them, they’re hidden in the properties of density matrices, which were in turn determined using the Born probabilities.
Not even the quantum-information-ey definition of “transfer of information from the measuree to the measurer?”
That sounds fine, but there’s no objective way of defining what a “measurer” is. So essentially what you have is a ‘solipsistic’ theory that predicts “the measurer’s” future measurements but refuses to give any determinate picture of the “objective reality” of which the measurer herself is just a part.
I have to concede that many thinkers are prepared to live with this, and scale down their ambitions about the scientific enterprise accordingly, but it seems unsatisfactory to me. Surely there is such a thing as “objective reality”, and I think science should try to tell us what it’s like.
That may very well be true, but how does pointing out that the Copenhagen Interpretation denies the objective existence of the wavefunction entail it?
What I’m taking from this is that you don’t know many copenhagenists familiar with Bell’s theorem.
Well, even after the next question of “why do density matrices work that way?”, you can always ask “why?” one more time. But eventually we, having finite information, will always end with something like “because it works.” So how can we judge explanations? Well, one “why” deeper is good enough for me.
Perhaps. My post in the discussion section, and my subsequent comments, try (and fail!) to explain as clearly as I can what troubles me about MWI.
It’s true that an MWI non-collapsing wavefunction has ‘enough information’ to pin down the Born probabilities, and it’s also true that you can’t get the empirical predictions exactly right unless you simulate the entire wavefunction. But it still seems to me that in some weird sense the wavefunction contains ‘too much’ information, in the same way that simulating a classically indeterministic universe by modelling all of its branches gives you ‘too much information’.
But it still seems to me that in some weird sense the wavefunction contains ‘too much’ information, in the same way that simulating a classically indeterministic universe by modelling all of its branches gives you ‘too much information’.
I know what you mean, but as I’m sure you know it’s not mere perversity that has led many to accept “modeling all the branches” of the QM universe. In the case of a classically indeterministic universe, you can model just one indeterministic branch, but in the case of the QM universe you can’t do that, or you can’t do it anywhere near as satisfactorily. The “weirdness” of QM is precisely that aspect of it which (in the eyes of many) forces us to accept the reality of all the branches.
Finally, let me return to your original question:
Is there a formal expression of the theory of measurement (in a universally agreed upon language) where [the fact that one doesn’t need to ‘add something’ to obtain a many worlds theory, whereas one does need to ‘add something’ to obtain a single world theory] can be demonstrated?
Based on your previous replies to me, it’s evident that you both believe in, and have a fairly sophisticated understanding of, the idea that you can extract the empirical predictions of quantum mechanics from unitary evolution alone (with no hidden variables) and without ‘adding anything’ (like Many Minds or whatever).
Since one obviously does need to ‘add something’ (e.g. rules about collapse, or Bohmian trajectories) in order to obtain a ‘single universe’ theory, it sounds as though you’ve answered your own question. Or at least, it’s not clear to me what kind of answer you were expecting other than that. (I don’t understand how it helps or why it’s necessary to use a ‘formal expression of the theory of measurement’, or even what such a thing would mean.)
Every interpretation is “adding something.” Just because interpreters choose to bundle their extra mechanisms in vague English-language “interpretations” rather than mathematical models does not mean they aren’t extra mechanisms. Copenhagen adds an incoherent and subjective entity called “the observer.” MWI adds a preposterous amount of mechanism involving an infinite and ever-exponentially-expanding number of completely unobservable clone universes. Copenhagen grossly violates objectivity and MWI grossly violates Occam’s Razor. Also, MWI needs a way to determine when a “world” splits, or to shove the issue under the rug, every bit as much as collapse theories need to figure out or ignore when collapse occurs. If, as many “interpreters” like to claim, QM itself is just the wavefunction, then collapse and world-splits are both extra mechanisms.
But QM is not just the wavefunction. QM is also the Born probabilities. The wavefunction predicts nothing if we do not square it to find the probabilities of the events we actually observe. Of all the interpretations, objective collapse adds the least to quantum mechanics as it is actually practiced. Everybody who uses QM for practical purposes uses the Born probabilities or the direct consequences thereof (e.g. spectra). Thus—despite the many who shudder at the nondeterminism of the universe and thus come up with interpretations like Copenhagen and MWI to try to turn inherent nondeterminism into mere subjective ignorance—the nondeterministic quantum event whereby a superposition of eigenvectors reduces to a single eigenvector (and the various other isomorphic ways this can be mathematically represented) is every bit as central to QM as the nominally deterministic wavefunction. The Born probabilities are not in any way “extra mechanism”; they are central to QM. Even more central than the wavefunction, because all that we observe directly are the Born random events. The wavefunction we never observe directly; we only infer it as defining the probability distribution of the nondeterministic events we do observe.
Thus any interpretation of QM as it is actually practiced must take the Born probabilities as being at least as objective and physical as the wavefunction. If the Born probabilities are objective, we have objective collapse, and neither Copenhagen nor MWI are true.
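As an illustration of the recipe described above (a sketch under my own arbitrary choices of observable and state, not any particular objective-collapse model): expand the state in the eigenvectors of the measured observable, square the coefficients to get the Born probabilities, and nondeterministically reduce the superposition to a single eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                     # an arbitrary Hermitian "observable"

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi = psi / np.linalg.norm(psi)              # an arbitrary normalized state

eigvals, eigvecs = np.linalg.eigh(H)
coeffs = eigvecs.conj().T @ psi              # <eigvec_i | psi>
born = np.abs(coeffs) ** 2                   # Born probabilities, summing to 1

outcome = rng.choice(len(eigvals), p=born)   # the one nondeterministic event we observe
psi_after = eigvecs[:, outcome]              # the superposition reduced to one eigenvector
print(eigvals[outcome], np.round(born, 3))
```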
Wikipedia has a bare-bones description of objective collapse:
http://en.wikipedia.org/wiki/Objective_collapse_theory
Further experimental evidence: if the Born probabilities do not represent an objective and physical randomness that is inherent to the universe, then the EPR/Bell/Aspect et al. work tells us that FTL signaling (and more importantly a variety of related paradoxes, FTL signaling not itself being paradoxical in QM) is possible. QM is not special relativity. Special relativity can’t explain the small scale or even certain macroscale effects like diffraction that QM explains. Special relativity is just an emergent large-scale special case of QM (specifically of QFT); it is QM that is fundamental. QM itself, in the EPR et al. line of work, tells us that it is the objective and physical randomness inherent in the universe, not causal locality, that stands in the way of FTL signaling and its associated paradoxes.
MWI adds a preposterous amount of mechanism involving an infinite and ever-exponentially-expanding number of completely unobservable clone universes.
There’s no mechanism to it other than the mechanism that every interpretation of QM already has for describing the evolution of non-macroscopic quantum systems. MWI just says that large systems and small systems aren’t separate magisteria with different laws.
Also, MWI needs a way to determine when a “world” splits, or to shove the issue under the rug, every bit as much as collapse theories need to figure out or ignore when collapse occurs.
“Worlds” and “branching” are epiphenomenal concepts; they’re simplifications of what MWI actually talks about (see Decoherence is Pointless).
It doesn’t matter whether branching occurs at a point of time or during some blob of time, probabilistic or otherwise: it’s a central part of MWI and you need an equation to describe when it happens. And that equation should agree with the Born probabilities up to our observational limits. Likewise for collapse in theories that invoke collapse. Otherwise it’s just hand-waving, not science.
What is or is not a “branch” is unimportant. If you have read the link you’ll know that a “branch” is not a point mass but a blob spread out in configuration space. All MWI needs is “the probability density of finding oneself in point x in the wavefunction is the amplitude squared at that point”. It’s standard probability theory then to integrate over a “branch” to find your probability of being in that branch. But the only reason to care about “branches” is because the world looks precisely identical to an observer at every point in that branch.
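Here is a sketch of the “integrate over a branch” step, assuming the wavefunction has decohered into two well-separated blobs in a one-dimensional configuration space (the Gaussian shapes and the 0.8/0.6 weights are arbitrary choices for illustration):

```python
import numpy as np

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

blob_left = np.exp(-(x + 8.0) ** 2 / 2.0)
blob_right = np.exp(-(x - 8.0) ** 2 / 2.0)
blob_left = blob_left / np.sqrt(np.sum(blob_left**2) * dx)     # normalize each blob
blob_right = blob_right / np.sqrt(np.sum(blob_right**2) * dx)
psi = 0.8 * blob_left + 0.6 * blob_right      # two decohered "branches", negligible overlap

density = np.abs(psi) ** 2
p_left = np.sum(density[x < 0]) * dx          # probability of finding oneself in the left blob
p_right = np.sum(density[x >= 0]) * dx
print(round(p_left, 3), round(p_right, 3))    # ~0.64 and ~0.36
```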
Not a clue. But in this particular case, the argument is that the theory without multiple worlds is precisely the multi-worlds theory with an extra postulate, so it’s certainly more complicated.
It would really help if some people who knew about the relevant parts of the Sequences lurked around to aid the confused!
“Eliezer’s argument is that multiple worlds require no additions to the length of the theory if it was formally expressed, whereas a ‘deleting worlds’ function is additional. It’s also unclear where it would kick in, what ‘counts’ as a sufficiently fixed function to chop off the other bit.”
Run time is at least as important as length. If we want to simulate evolution of the wavefunction on a computer, do we get a more accurate answer or explain more phenomena by computing an exploding tree of alternatives that don’t actually significantly influence anything that we can ever observe, or does the algorithm explain more by pruning these irrelevant branches and elaborating the branches that actually make an observable difference? We save exponential time and thus explain exponentially more by pruning the branches.
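A toy illustration of the cost intuition in this comment (my own construction; real QM simulations are of course far more involved): tracking every branch of N sequential binary “splits” stores 2^N amplitudes, while a pruned, sampled simulation keeps a single branch per run and still reproduces the observed statistics.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 20, 0.64   # N binary "splits", Born probability p for one side at each step

# Exploding tree: keep every branch; the array doubles at each step.
amps = np.array([1.0])
for _ in range(N):
    amps = np.concatenate([amps * np.sqrt(p), amps * np.sqrt(1 - p)])
print(len(amps))          # 2**20 = 1048576 branch amplitudes tracked

# Pruned simulation: sample one outcome per split, keeping O(1) storage per run;
# repeated runs reproduce the observed statistics.
samples = rng.random((10000, N)) < p
print(samples.mean())     # ~0.64
```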
“It’s not clear from your post if you think the other half’s chopped off because we haven’t observed it, or we don’t observe it because it’s chopped off!”
Neither. QM is objective and the other half is chopped off because decoherence created a mutually exclusive alternative. This presents no more problem for my interpretation (which might be called “quantum randomness is objective” or “God plays dice, get over it”) than it does for MWI (when does a “world” branch off?). It’s the sorites paradox either way.
“The other point is that if we are ‘Human-LEFT’ then we don’t expect the other part of the wave function to be observable to us. Does that mean we delete it from what is real?”
Yes, for the same reason we delete other imagined but unobserved things like Santa Claus, absolute space, and the aether from what we consider real. If we don’t observe them and they are unnecessary for explaining the world we do see, they don’t belong in science.
You’re arguing about something that seems interesting and possibly important, but it doesn’t sound like the mathematical likelihood of the theory. Eliezer starts from a Bayesian interpretation of this number as a rational degree of belief, theoretically determined by the evidence we have. As I understand it, this quantity has a correct value, and the question of how much the theory explains has a definite answer, whether or not we can calculate it. The alternate Discordian or solipsistic view has much to recommend it but runs into problems if we take it as a general principle.
Now run time has no obvious effect on likelihood of truth. I don’t know if message length does either, but at least we have an argument for this (see Solomonoff induction). And the claim that MWI adds an extra postulate of its own seems false. MWI tries to follow Occam’s Razor—in a form that seems to agree with Solomonoff and Isaac Newton—by saying that no causes exist but arrows attached to large sets of numbers, and the function that attaches them. Everything you call magical or imaginary follows directly from this.
Before moving on to the problem with this interpretation, please note that Bayesianism also gives a different account of “unobserved things”. Some of them, like aether and possibly absolute space, decrease the prior likelihood of a theory by adding extra assumptions to the math. (Eliezer argues this applies to objective collapse.) Others, like Santa Claus, would increase the probability of evidence we do not observe. This has no relevance for alternate worlds. The evidence you seem to want has roughly zero probability in the theory you criticize, so its absence doesn’t tell us anything. The argument for adopting the theory lies elsewhere, in the success of quantum math.
Now obviously the Born rule creates a problem for this argument. The theory has a great big mathematical hole in it. But from this Bayesian perspective, and going by the information I have so far, we have no reason to think that whatever fills the hole will reduce the number of “worlds” to exactly one, any more than we have reason to believe in exactly 666 worlds. It really does seem that simple. And from what I’ve managed to read of Feynman and Hibbs the authors definitely believe in more than one world. (“From what does the uncertainty arise? Almost without doubt it arises from the need to amplify the effects of single atomic events to such a level that they may be readily observed by large systems.” p.22) So I don’t think my simple view results from ignorance of QM as it existed then.
You’re almost exactly playing the part of Huve Erett in this dialog:
It sure seems to me as though Huve Erett advocates objective collapse. Maybe you can explain what part of the dialog convinces you that Huve Erett can’t be talking about objective collapse.
“This happens when, way up at the macroscopic level, we ‘measure’ something.”
vs. in objective collapse, when the collapse occurs has no necessary relationship to measurement. “Measurement” is a Copenhagen thing.
“So the wavefunction knows when we ‘measure’ it. What exactly is a ‘measurement’? How does the wavefunction know we’re here? What happened before humans were around to measure things?”
Again, this describes Copenhagen (or even Conscious Collapse, which is even worse). Objective collapse depends on neither measurements nor measurers.
Much of the rest of this parody might be characterized as a preposterously unfair roast of collapse theories, objective or otherwise, but the trouble is all the valid criticisms also apply to MWI. For example “the only law in all of quantum mechanics that is non-linear, non-unitary, non-differentiable and discontinuous” also applies to the law that is necessary for any actually scientific account of MWI, but that MWI people sweep under the rug with incoherent talk about “decoherence”, namely when “worlds” “split” such that we “find ourselves” in one but not the other. AFAIK, no MWI proponent has ever proposed a linear, unitary, or differentiable function that predicts such a split that is consistent with what we actually observe in QM. And they couldn’t, because “world split” is nearly isomorphic with “collapse”—it’s just an excessive way of saying the same thing. If MWI came up with an objective “world branch” function it would serve equally well, or even better given Occam’s Razor, as an objective collapse function. In both MWI and collapse part of the wave function effectively disappears from the observable universe—MWI only adds a gratuitous extra mechanism, that it re-appears in another, imaginary, unobservable “world.”
BTW, the standard way that QM treats the nondeterministic event predicted probabilistically by the wavefunction and the Born probabilities (whether you choose to call such an event “collapse”, “decoherence”, “branching worlds”, or otherwise) is completely non-linear, non-unitary, non-differentiable and discontinuous, and worst of all, nondeterministic (horrors!). In the matrix model, the “collapse”, if you will forgive the phrase, reduces a large (often infinite) set of possible eigenvalues and corresponding eigenvectors to one, the one we actually observe, according to the Born probabilities. No matter how much “interpreters” try to sweep this under the rug, this nondeterministic disappearance of all eigenvectors (or their isomorphs in other algebras) save one is central to real-world QM math, and if it weren’t so it wouldn’t predict the quantum events we actually observe. So the dispute here is with QM itself, not with collapse theories.
“This happens when, way up at the macroscopic level, we ‘measure’ something.”
vs. in objective collapse, when the collapse occurs has no necessary relationship to measurement.
Well, I don’t agree with the “vs”, but let that pass, since then the dialog quickly continues:
Then he reaches out for the paper, scratches out “When you perform a measurement on a quantum system”, and writes in, “When a quantum superposition gets too large.”
That occurs as early as one fourth of the way through the dialog, so that leaves three fourths of the dialog addressing what you are apparently calling an objective collapse theory.
Eliezer thinks objective collapse = Copenhagen. More precisely, I’ve never seen him distinguish the two, or acknowledge the possibility of denying that the wavefunction exists.
When an object leaves our Hubble volume does it cease to exist?
Run time is at least as important as length.
It’s reasonable to assume run time is important, but problematic to formalize. Run time is much more dependent on the underlying computational abstraction than description length is. Is the computer sequential? parallel? non-deterministic? quantum?
Depending on the underlying computer model MWI could actually be faster than a collapse hypothesis. MWI is totally local, hence easily parallelizable. Collapse hypotheses however require non-local communication, which create severe bottlenecks for parallel simulations.
“Imagine a universe containing an infinite line of apples.”
If we did I would imagine it, but we don’t. In QM we don’t observe infinite anything, we observe discrete events. That some of the math to model this involves infinities may be merely a matter of convenience to deal with a universe that may merely have a very large but finite number of voxels (or similar), as suggested by Planck length and similar ideas.
“It’s reasonable to assume run time is important, but problematic to formalize.”
Run time complexity theory (and also memory space complexity, which also grows at least exponentially in MWI) is much easier to apply than Kolmogorov complexity in this context. Kolmogorov complexity only makes sense as an order of magnitude (i.e. O(f(x)), not merely a constant), because choice of language adds an (often large) constant to program length. So from Kolmogorov theory it doesn’t much matter that one adds a small extra constant amount of bits to one’s theory, making it problematic to invoke Kolmogorov theory to distinguish between different interpretations and equations that each add only a small constant amount of bits.
(Besides the fact that QM is really wavefunction + nondeterministic Born probability, not merely the nominally deterministic wave function on which MWI folks focus, and everybody needs some “collapse”/”world split” rule for when the nondeterministic event happens, so there really is not even any clear constant factor equation description length parsimony to MWI).
OTOH, MWI clearly adds at least an exponential (and perhaps worse, infinitely expanding at every step!) amount of run time, and a similar amount of required memory, not merely a small constant amount. As for the ability to formalize this there’s a big literature of run-time complexity that is similar to, but older and more mature than, the literature on Kolmogorov complexity.
OTOH, MWI clearly adds at least an exponential (and perhaps worse, infinitely expanding at every step!) amount of run time, and a similar amount of required memory, not merely a small constant amount.
I see. I think you are making a common misunderstanding of MWI (in fact, a misunderstanding I had for years). There is no actual branching in MWI, so the amount of memory required is constant. There is just a phase space (a very large phase space), and amplitudes at each point in the phase space are constantly flowing around and changing (in a local way).
If you had a computer with as many cores as there are points in the phase space then the simulation would be very snappy. On the other hand, using the same massive computer to simulate a collapse theory would be very slow.
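A minimal sketch of the fixed-phase-space point (assuming a toy register of n qubits; the gate choice is arbitrary): the state vector is 2^n amplitudes from the start, and applying unitaries spreads the amplitude around without ever growing the array.

```python
import numpy as np

n = 10
psi = np.zeros(2**n, dtype=complex)
psi[0] = 1.0                                   # all amplitude starts concentrated

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard gate

def apply_single_qubit_gate(psi, gate, k, n):
    """Apply a 2x2 gate to qubit k of an n-qubit state vector (a local update)."""
    psi = psi.reshape([2] * n)
    psi = np.moveaxis(psi, k, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, k)
    return psi.reshape(-1)

for k in range(n):
    psi = apply_single_qubit_gate(psi, H, k, n)
    print(k, psi.size, np.count_nonzero(np.abs(psi) > 1e-12))
# psi.size stays 2**n = 1024 throughout; only the spread of nonzero amplitudes grows.
```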
“Imagine a universe containing an infinite line of apples.”
If we did I would imagine it, but we don’t.
This is an answer to a question from another person’s thread. My question was “When an object leaves our Hubble volume does it cease to exist?” I’m still curious to hear your answer.
That’s an easy one—objective collapse QM predicts that with astronomically^astronomically^astronomically high probability objects large enough to be seen at that distance (or even objects merely the size of ourselves) don’t cease to exist. Like everything else they continue to follow the laws of objective collapse QM whether we observe them or not.
The hypo is radically different from believing in an infinitely expanding infinity of parallel “worlds”, none of which we have ever observed, either directly or indirectly, and none of which are necessary for a coherent and objective QM theory.
That’s an easy one—objective collapse QM predicts that with astronomically^astronomically^astronomically high probability objects large enough to be seen at that distance (or even objects merely the size of ourselves) don’t cease to exist. Like everything else they continue to follow the laws of objective collapse QM whether we observe them or not.
Then I can define a new hypothesis, call it objective collapse++, which is exactly your objective collapse hypothesis with the added assumption that objects cease to exist outside of our Hubble volume. Collapse++ has a slightly longer description length, but it has a greatly reduced runtime. If we care about runtime length, why would we not prefer Collapse++ over the original collapse hypothesis?
The hypo is radically different from believing in an infinitely expanding infinity of parallel “worlds”
See my above comment about MWI having a fixed phase space that doesn’t actually increase in size over time. The idea of an increasing number of parallel universes is incorrect.
“MWI having a fixed phase space that doesn’t actually increase in size over time.”
(1) That assumes we are already simulating the entire universe from the Big Bang forward, which is already preposterously infeasible (not to mention that we don’t know the starting state).
(2) It doesn’t model the central events in QM, namely the nondeterministic events which in MWI are interpreted as which “world” we “find ourselves” in.
Of course in real QM work simulations are what they are, independently of interpretations, they evolve the wavefunction, or a computationally more efficient but less accurate version of same, to the desired elaboration (which is radically different for different applications). For output they often either graph the whole wavefunction (relying on the viewer of the graph to understand that such a graph corresponds to the results of a very large number of repeated experiments, not to a particular observable outcome) or do a Monte Carlo or Markov simulation of the nondeterministic events which are central to QM. But I’ve never seen a Monte Carlo or Markov simulation of QM that simulates the events that supposedly occur in “other worlds” that we can never observe—it would indeed be exponentially (at least) more wasteful in time and memory, yet utterly pointless, for the same reasons that the interpretation itself is wasteful and pointless. You’d think that a good interpretation, even if it can’t produce any novel experimental predictions, could at least provide ideas for more efficient modeling of the theory, but MWI suggests quite the opposite, gratuitously inefficient ways to simulate a theory that is already extraordinarily expensive to simulate.
Objective collapse, OTOH, continually prunes the possibilities of the phase space and thus suggests exponential improvements in simulation time and memory usage. Indeed, some versions of objective collapse are bona fide new theories of QM, making experimental predictions that distinguish them from the model of perpetual elaboration of a wavefunction. Penrose for example bases his theory on a quantum gravity theory, and several experiments have been proposed to test his theory.
BTW, it’s MWI that adds extra postulates. In both MWI and collapse, parts of the wavefunction effectively disappear from the observable universe (or as MWI folks like to say “the world I find myself in.”) MWI adds the extra and completely gratuitous postulate that this portion of the wave function magically re-appears in another, imaginary, completely unobservable “world”, on top of the gratuitous extra postulate that these new worlds are magically created, and all of us magically cloned, such that the copy of myself I experience finds me in one “world” but not another. And all that just to explain why we observe a nondeterministic event, one random eigenstate out of the infinity of eigenstates derived from the wavefunction and operator, instead of observing all of them.
Why not just admit that quantum events are objectively nondeterministic and be done with it? What’s so hard about that?
In both MWI and collapse, parts of the wavefunction effectively disappear from the observable universe (or as MWI folks like to say “the world I find myself in.”) MWI adds the extra and completely gratuitous postulate that this portion of the wave function magically re-appears in another, imaginary, completely unobservable “world”, on top of the gratuitous extra postulate that these new worlds are magically created, and all of us magically cloned, such that the copy of myself I experience finds me in one “world” but not another.
This does not correspond to the MWI as promulgated by Eliezer Yudkowsky, which is more like, “In MWI, parts of the wavefunction effectively disappear from the observable universe—full stop.” My understanding is that EY’s view is that chunks of the wavefunction decohere from one another. The “worlds” of the MWI aren’t something extra imposed on QM; they’re just a useful metaphor for decoherence.
This leaves the Born probabilities totally unexplained. This is the major problem with EY’s MWI, and has been fully acknowledged by him in posts made in years past. It’s not unreasonable that you would be unaware of this, but until you’ve read EY’s MWI posts, I think you’ll be arguing past the other posters on LW.
Upvoted, although my understanding is that there is no difference between Eliezer’s MWI and canonical MWI as originally presented by Everett. Am I mistaken?
Since I’m not familiar with Everett’s original presentation, I don’t know if you’re mistaken. Certainly popular accounts of MWI do seem to talk about “worlds” as something extra on top of QM.
Popular accounts written by journalists who don’t really understand what they are talking about may treat “worlds” as something extra on top of QM, but after reading serious accounts of MWI by advocates for over two decades, I have yet to find any informed advocate who makes that mistake. I am positive that Everett did not make that mistake.
I think that’s just a common misunderstanding most people have of MWI, unfortunately. Visualizing a giant decohering phase space is much harder than imagining parallel universes splitting off. I’m fairly certain that Eliezer’s presentation of MWI is the standard one though (excepting his discussion of timeless physics perhaps).
This leaves the Born probabilities totally unexplained.
Mainstream philosophy of science claims to have explained the Born probabilities; Eliezer and some others here disagree with the explanations, but it’s at least worth noting that the quoted claim is controversial among those who have thought deeply about the question.
Imagine a universe containing an infinite line of apples. You can see them getting smaller into the distance, until eventually it’s not possible to resolve individual apples. Do you want to say that we could never justify or regard-as-scientific a theory which said “this line of apples is infinite”?
Like another Dave almost three years ago, I think this post was the most effective so far. Not as in ‘constructed better’, because I suspect that almost everything in previous posts in the QM series and quite a lot in posts elsewhere was building up to this.
I’d been getting used to thinking in terms of sensors being entangled with the particles they sense etc. but references to humans being entangled too seemed to be somewhere between obvious and avoiding the issue: I didn’t feel what that meant. In this thread I’d got to the bottom and was wondering why we were talking about a physically infinite universe when the message of halfway through internalised.
Whether I’ll be persuaded of interpretations on QM is unclear, as I have little maths and less physics so I feel hideously under-qualified to judge based on one side of the argument given that it’s perfectly plausible that the counter-argument relies on tools that I don’t have available. But in terms of the aim of making QM seem reasonable, and non-mysterious this is doing astonishingly well. Given that at a certain level I found the mysteriousness quite reassuring, that’s a particularly tough job.
Let me join all those observing that these are great explanations of QM. But I don’t get why we need to invoke MWI and the Ebborians. If the wavefunction evolves into
(Human-LEFT Sensor-LEFT Atom-LEFT) + (Human-RIGHT Sensor-RIGHT Atom-RIGHT)
but we only observe
(Human-LEFT Sensor-LEFT Atom-LEFT)
then it makes far more sense to me that, rather than conjuring up a completely unobservable universe with clones of ourselves where (Human-RIGHT Sensor-RIGHT Atom-RIGHT) happened, a far more empirical explanation is that it simply didn’t happen. Half of the wavefunction disappears, nondeterministically. Why, as Occam might say, multiply trees beyond necessity? Prune them instead. Multiple “worlds” strike me as no more necessary than the aether or absolute space.
This is addressed in Decoherence is Simple.
(Also, the tag doesn’t work because Less Wrong uses Markdown formatting for comments; if you click “Help” under the comment box you can see a reference to some of the more common constructions.)
(Also, welcome to Less Wrong!)
I think the obvious reply here is ‘keep reading to the end of the Sequence’! After all, quite a lot of space is devoted to looking at different models.
On the Occcam’s razor point, the question is what we’re endeavouring to make simple in our theories. Eliezer’s argument is that multiple worlds require no additions to the length of the theory if it was formally expressed, whereas a ‘deleting worlds’ function is additional. It’s also unclear where it would kick in, what ‘counts’ as a sufficiently fixed function to chop off the other bit. It’s not clear from your post if you think the other half’s chopped off because we haven’t observed it, or we don’t observe it because it’s chopped off!
The other point is that if we are ‘Human-LEFT’ then we don’t expect the other part of the wave function to be observable to us. Does that mean we delete it from what is real? The post addressing that question in a context divorced from QM: http://lesswrong.com/lw/pb/belief_in_the_implied_invisible/
Is there a formal expression of the theory of measurement (in a universally agreed upon language) where this can be demonstrated?
I’m fairly certain the answer is “no”.
Basically, what the MWI believer wants to argue is that in a hypothetical universe where we had no hidden variables and no collapses—nothing but unitary evolution under the Schrödinger equation—observers would still have experiences where it ‘seems as if’ there is only one universe, and it ‘seems as if’ their measurement outcomes are probabilistic as described by the Born rule.
On the other hand, the MWI skeptic denies that the formal description of the theory suffices to determine “how things would seem if it were true” without extra mathematical machinery.
Unfortunately, the extra machinery that people propose, in order to bridge the gap between theory and observation, tends to be some combination of complicated, arbitrary-seeming, ugly and inadequate (e.g. the Many Minds of Albert and Loewer, or the various attempts to reduce Born probabilities to ‘counting probabilities’ by De Witt and others). This leaves some people pining for the relative simplicity and elegance of Bohm’s theory.
(Bohm’s theory is precisely “QM without collapse + some extra machinery to account for observations”. The only differences are that (a) the extra machinery describes a ‘single universe’ rather than a multiverse and (b) it doesn’t pretend to be the inevitable, a priori ‘unpacking’ of empirical predictions which are already implicit in “QM without collapse”.)
I’m not sure that’s in any debate or that it should be. MWI and the copenhagen interpretation do produce identical pictures for a person inside. The physics is identical—what’s really different is the ontology. The Bohm theory produces the same results in the non-relativistic picture but apparently has some problems with going relativistic which aren’t resolved. And since relativity definitely is true, that’s a problem.
(a) The Copenhagen Interpretation is incoherent, and for that reason it’s obviously wrong. I wish everyone would just agree never to talk about it again.
(b) It’s a delicate philosophical question whether and how far a formal mathematical theory produces a “picture for the person inside”.
(c) MWI can mean several different things. If, temporarily, we take it to mean “QM without collapses or hidden variables + whatever subjective consequences follow from that” then you have somewhere between a lot of work and an impossible task to deduce the Born probabilities.
(d) I’m not a Bohmian. I think ultimately we will attain a satisfactory understanding of how the Born probabilities are, after all, ‘deducible’ from the rest of QM. Some promising directions include Zurek’s work on decoherence and einselection, Hanson’s notion of “Mangled Worlds”, and cousin_it’s ideas about how to ‘reduce’ the notion of probability to other things.
(a) By the copenhagen interpretation, do you mean what I meant, i.e. the status-quo interpretation used in most of physics? Would you please explain how it’s obviously incoherent?
(b, c, and some discouragement for d) The delicacy is more a property of philosophers than the universe. If people are built out of ordinary matter, then in thought experiments (e.g. Schrodinger’s cat, but with people) we can swap them for any other system with the right number of possible states (one state per possible outcome). Since we don’t ever subjectively experience being in a superposition, it’s fairly obvious that you have to get rid of the resulting superposition if you ask what a person subjectively experiences. To do that you want to do a thing called tracing out the environment (Sorry, no good page on this, but it’s this operation) that in short changes entanglement to probabilistic correlation. 3 guesses what the probabilities are (though this really just moves the information about Born probabilities are from an explicit rule to the rules of density matricies, so it’s not really “deducible”).
If the wavefunction collapses upon measurement, and no adequate definition of the term “measurement” has been given, then the theory as it stands is incoherent. (I realize that a Copenhagenist thinks that they can get around this by simply denying that the wavefunction exists, but the price of that move is that they don’t have any coherent picture of reality underneath the mathematics.)
OK, but let’s note in passing that the MWI believer needs show this a priori, whether from the mathematics of QM or by deconstructing the concept of “experiencing a superposition” or both. I don’t think that should be particularly difficult, though.
I know what that means.
The trouble is at some point you need to explain why our measurement records show sequences of random outcomes distributed according to the Born rules. Now it may be the case that branches where measurement records show significant deviation from the Born rules have ‘low amplitude’, but then you need to explain why we don’t experience ourselves in “low amplitude branches”. More precisely, you need to explain why we seem to experience ourselves in branches at rates proportional to the norm squared of the amplitudes of the branches (whatever this talk of ‘experiencing ourselves’ is supposed to mean, and whatever a ‘branch’ is supposed to be). Why should that be true? Why shouldn’t the subjective probabilities simply be a matter of ‘counting up’ branches irrespective of their weighting? After all, the wavefunction still contains all of the information even about ‘lightly weighted’ branches.
The MWI believer thinks that, by talking about reduced density matrices obtained by ‘tracing out’ the environment, you’ve thereby made good progress towards showing where the Born probabilities come from. But the MWI skeptic thinks that the ‘last little bit’ that you still have to do (i.e. explaining why we experience ourselves in heavily weighted branches more than lightly weighted ones) is and always was the entire problem.
Sorry, I don’t know what you’re talking about.
Not even the quantum-information-ey definition of “transfer of information from the measuree to the measurer?”
What I’m taking from this is that you don’t know many copenhagenists familiar with Bell’s theorem.
Well I suppose I could tell the MWI (or any sort of) skeptic about what density matricies and mixed states are (other readers: to wikipedia!), and how when you see a mixed state it is by definition probabilistic.
Well, even after the next question of “why do density matrices work that way?”, you can always ask “why?” one more time. But eventually we, having finite information, will always end with something like “because it works.” So how can we judge explanations? Well, one “why” deeper is good enough for me.
Tsk, fine, 0 guesses then: the probabilities you get from tracing out the environment are the Born probabilities. But as I said this doesn’t count as deducing them, they’re hidden in the properties of density matrices, which were in turn determined using the Born probabilities.
That sounds fine, but there’s no objective way of defining what a “measurer” is. So essentially what you have is a ‘solipsistic’ theory, that predicts “the measurer’s” future measurements but refuses to give any determinate picture of the “objective reality” of which the measurer herself is just a part.
I have to concede that many thinkers are prepared to live with this, and scale down their ambitions about the scientific enterprise accordingly, but it seems unsatisfactory to me. Surely there is such a thing as “objective reality”, and I think science should try to tell us what it’s like.
That may very well be true, but how does pointing out that the Copenhagen Interpretation denies the objective existence of the wavefunction entail it?
Perhaps. My post in the discussion section, and my subsequent comments, try (and fail!) to explain as clearly as I can what troubles me about MWI.
It’s true that an MWI non-collapsing wavefunction has ‘enough information’ to pin down the Born probabilities, and it’s also true that you can’t get the empirical predictions exactly right unless you simulate the entire wavefunction. But it still seems to me that in some weird sense the wavefunction contains ‘too much’ information, in the same way that simulating a classically indeterministic universe by modelling all of its branches gives you ‘too much information’.
I know what you mean, but as I’m sure you know it’s not mere perversity that has led many to accept “modeling all the branches” of the QM universe. In the case of a classically indeterministic universe, you can model just one indeterministic branch, but in the case of the QM universe you can’t do that, or you can’t do it anywhere near as satisfactorily. The “weirdness” of QM is precisely that aspect of it which (in the eyes of many) forces us to accept the reality of all the branches.
Finally, let me return to your original question:
Based on your previous replies to me, it’s evident that you both believe in, and have a fairly sophisticated understanding of, the idea that you can extract the empirical predictions of quantum mechanics from unitary evolution alone (with no hidden variables) and without ‘adding anything’ (like Many Minds or whatever).
Since one obviously does need to ‘add something’ (e.g. rules about collapse, or Bohmian trajectories) in order to obtain a ‘single universe’ theory, it sounds as though you’ve answered your own question. Or at least, it’s not clear to me what kind of answer you were expecting other than that. (I don’t understand how it helps or why it’s necessary to use a ‘formal expression of the theory of measurement’, or even what such a thing would mean.)
Every interpretation is “adding something.” Just because interpreters choose to bundle their extra mechanisms in vague English language “interpretations” rather than mathematical models does not mean they aren’t extra mechanisms. Copenhagen adds an incoherent and subjective entity called “the observer.” MWI adds a preposterous amount of mechanism involving an infinite and ever-exponentially-expanding number of completely unobservable clone universes. Copenhagen grossly violates objectivity and MWI grossly violates Occam’s Razor. Also, MWI needs a way to determine when a “world” splits, or to shove the issue under the rug, every bit as much as collapse theories need to figure out or ignore when collapse occurs. If as many “interpreters” like to claim QM itself is just the wavefunction, then collapse and world-splits are both extra mechanisms.
But QM is not just the wavefunction. QM is also the Born probabilities. The wavefunction predicts nothing if we do not square it to find the probabilities of the events we actually observe. Of all the interpretations, objective collapse adds the least to quantum mechanics as it is actually practiced. Everybody who uses QM for practical purposes uses the Born probabilities or the direct consequences thereof (e.g. spectra). Thus—despite the many who shudder at the nondeterminism of the universe and thus come up with interpretations like Copenhagen and MWI to try to turn inherent nondeterminism into mere subjective ignorance—the nondeterministic quantum event whereby a superposition of eigenvectors reduces to a single eigenvector (and the various other isomorphic ways this can be mathematically represented) is every bit as central to QM as the nominally deterministic wavefunction. The Born probabilities are not in any way “extra mechanism” they are central to QM. Even more central than the wavefunction, because all that we observe directly are the Born random events. The wavefunction we never observe directly, but only infer it as defining the probability distribution of the nondeterministic events we do observe.
Thus any interpretation of QM as it is actually practiced must take the Born probabilities as being at least as objective and physical as the wavefunction. If the Born probabilities are objective, we have objective collapse, and neither Copenhagen nor MWI are true.
Wikipedia has a bare-bones description of objective collapse:
http://en.wikipedia.org/wiki/Objective_collapse_theory
Further experimental evidence: if the Born probabilities do not represent an objective and physical randomness that is inherent to the universe, then the EPR/Bell/Aspect/et,. al. work tells us that FTL signaling (and more importantly a variety of related paradoxes, FTL signaling not itself being paradoxical in QM) is possible. QM is not special relativity. Special relativity can’t explain the small scale or even certain macroscale effects like diffraction that QM explains. Special relativity is just an emergent large-scale special case of QM (specifically of QFT), it is QM that is fundamental. QM itself, in the EPR/et. al. line of work, tells is that it is the objective and physical randomness inherent in the universe, not causal locality, that stands in the way of FTL signaling and its associated paradoxes.
There’s no mechanism to it other than the mechanism that every interpretation of QM already has for describing the evolution of non-macroscopic quantum systems. MWI just says that large systems and small systems aren’t separate magisteria with different laws.
“Worlds” and “branching” are epiphenomenal concepts; they’re simplifications of what MWI actually talks about (see Decoherence is Pointless).
It doesn’t matter whether branching occurs at a point of or at during some blob of time, probabilistic or otherwise, it’s a central part of MWI and you need an equation to describe when it happens. And that equation should agree with the Born probabilities up to our observational limits. Likewise for collapse in theories that invoke collapse. Otherwise it’s just hand-waving not science.
What is or is not a “branch” is unimportant. If you have read the link you’ll know that a “branch” is not a point mass but a blob spread out in configuration space. All MWI needs is “the probability density of finding oneself in point x in the wavefunction is the amplitude squared at that point”. It’s standard probability theory then to integrate over a “branch” to find your probability of being in that branch. But the only reason to care about “branches” is because the world looks precisely identical to an observer at every point in that branch.
Not a clue. But in this particular case, the argument is that the theory without mutliple worlds is precisely the multi-worlds theory with an extra postulate, so it’s certainly more complicated.
It would really help if some people who knew about the relevant parts of the Sequences lurked around to aid the confused!
“Eliezer’s argument is that multiple worlds require no additions to the length of the theory if it was formally expressed, whereas a ‘deleting worlds’ function is additional. It’s also unclear where it would kick in, what ‘counts’ as a sufficiently fixed function to chop off the other bit.”
Run time is at least as important as length. If we want to simulate evolution of the wavefunction on a computer, do we get a more accurate answer of more phenomena by computing an exploding tree of alternatives that don’t actually significantly influence anything that we can ever observe, or does the algorithm explain more by pruning these irrelevant branches and elaborating the branches that actually make an observable difference? We save exponential time and thus explain exponentially more by pruning the branches.
“It’s not clear from your post if you think the other half’s chopped off because we haven’t observed it, or we don’t observe it because it’s chopped off!”
Neither. QM is objective, and the other half is chopped off because decoherence created a mutually exclusive alternative. This presents no more of a problem for my interpretation (which might be called “quantum randomness is objective” or “God plays dice, get over it”) than it does for MWI (when does a “world” branch off?). It’s the sorites paradox either way.
“The other point is that if we are ‘Human-LEFT’ then we don’t expect the other part of the wave function to be observable to us. Does that mean we delete it from what is real?”
Yes, for the same reason we delete other imagined but unobserved things like Santa Claus, absolute space, and the aether from what we consider real. If we don’t observe them and they are unnecessary for explaining the world we do see, they don’t belong in science.
You’re arguing about something that seems interesting and possibly important, but it doesn’t sound like the mathematical likelihood of the theory. Eliezer starts from a Bayesian interpretation of this number as a rational degree of belief, theoretically determined by the evidence we have. As I understand it, this quantity has a correct value, and the question of how much the theory explains has a definite answer, whether or not we can calculate it. The alternate Discordian or solipsistic view has much to recommend it but runs into problems if we take it as a general principle.
Now run time has no obvious effect on likelihood of truth. I don’t know if message length does either, but at least we have an argument for this (see Solomonoff induction). And the claim that MWI adds an extra postulate of its own seems false. MWI tries to follow Occam’s Razor—in a form that seems to agree with Solomonoff and Isaac Newton—by saying that no causes exist but arrows attached to large sets of numbers, and the function that attaches them. Everything you call magical or imaginary follows directly from this.
Before moving on to the problem with this interpretation, please note that Bayesianism also gives a different account of “unobserved things”. Some of them, like aether and possibly absolute space, decrease the prior likelihood of a theory by adding extra assumptions to the math. (Eliezer argues this applies to objective collapse.) Others, like Santa Claus, would increase the probability of evidence we do not observe. This has no relevance for alternate worlds. The evidence you seem to want has roughly zero probability in the theory you criticize, so its absence doesn’t tell us anything. The argument for adopting the theory lies elsewhere, in the success of quantum math.
Now obviously the Born rule creates a problem for this argument. The theory has a great big mathematical hole in it. But from this Bayesian perspective, and going by the information I have so far, we have no reason to think that whatever fills the hole will reduce the number of “worlds” to exactly one, any more than we have reason to believe in exactly 666 worlds. It really does seem that simple. And from what I’ve managed to read of Feynman and Hibbs the authors definitely believe in more than one world. (“From what does the uncertainty arise? Almost without doubt it arises from the need to amplify the effects of single atomic events to such a level that they may be readily observed by large systems.” p.22) So I don’t think my simple view results from ignorance of QM as it existed then.
You’re almost exactly playing the part of Huve Erett in this dialog:
http://lesswrong.com/lw/q7/if_manyworlds_had_come_first/
Emphasize the “almost”. I’m advocating objective collapse, not Copenhagen.
It sure seems to me as though Huve Erett advocates objective collapse. Maybe you can explain what part of the dialog convinces you that Huve Erett can’t be talking about objective collapse.
“This happens when, way up at the macroscopic level, we ‘measure’ something.”
vs. in objective collapse, when the collapse occurs has no necessary relationship to measurement. “Measurement” is a Copenhagen thing.
“So the wavefunction knows when we ‘measure’ it. What exactly is a ‘measurement’? How does the wavefunction know we’re here? What happened before humans were around to measure things?”
Again, this describes Copenhagen (or even Conscious Collapse, which is even worse). Objective collapse depends on neither measurements nor measurers.
Much of the rest of this parody might be characterized as a preposterously unfair roast of collapse theories, objective or otherwise, but the trouble is that all of the valid criticisms also apply to MWI. For example, “the only law in all of quantum mechanics that is non-linear, non-unitary, non-differentiable and discontinuous” also applies to the law that any actually scientific account of MWI needs, but that MWI people sweep under the rug with incoherent talk about “decoherence”: the law specifying when “worlds” “split” such that we “find ourselves” in one but not the other. AFAIK, no MWI proponent has ever proposed a linear, unitary, or differentiable function that predicts such a split and is consistent with what we actually observe in QM. And they couldn’t, because “world split” is nearly isomorphic with “collapse”; it’s just a more extravagant way of saying the same thing. If MWI came up with an objective “world branch” function, it would serve equally well as an objective collapse function, or even better given Occam’s Razor. In both MWI and collapse, part of the wavefunction effectively disappears from the observable universe; MWI only adds a gratuitous extra mechanism by which it re-appears in another, imaginary, unobservable “world.”
BTW, the standard way that QM treats the nondeterministic event predicted probabilistically by the wavefunction and the Born probabilities (whether you choose to call that event “collapse”, “decoherence”, “branching worlds”, or something else) is completely non-linear, non-unitary, non-differentiable, discontinuous and, worst of all, nondeterministic (horrors!). In the matrix model it is the “collapse”, if you will forgive the phrase, of a large (often infinite) set of possible eigenvalues and corresponding eigenvectors down to a single one, the one we actually observe, selected according to the Born probabilities. No matter how much “interpreters” try to sweep it under the rug, this nondeterministic disappearance of all eigenvectors (or their isomorphs in other algebras) save one is central to real-world QM math, and if it weren’t so, QM wouldn’t predict the quantum events we actually observe. So the dispute here is with QM itself, not with collapse theories.
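For the record, here is roughly what that textbook recipe looks like in code. This is a minimal sketch with a made-up 2x2 observable and state; the point is only that the final step, selecting one eigenvalue according to the Born probabilities and dropping the rest, is the nondeterministic, discontinuous part.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up normalized state vector and a Hermitian "observable" (2x2 for brevity).
state = np.array([0.6, 0.8j])
observable = np.array([[1.0, 0.5],
                       [0.5, -1.0]])

# Eigenvalues/eigenvectors of the observable: the possible measurement outcomes.
eigenvalues, eigenvectors = np.linalg.eigh(observable)

# Born probabilities: squared magnitude of the state's overlap with each eigenvector.
amplitudes = eigenvectors.conj().T @ state
probabilities = np.abs(amplitudes) ** 2

# The nondeterministic step: one eigenvalue is realized, all the others drop out.
outcome_index = rng.choice(len(eigenvalues), p=probabilities)
post_state = eigenvectors[:, outcome_index]   # state after the event

print("Possible outcomes: ", eigenvalues)
print("Born probabilities:", probabilities.round(3))
print("Realized outcome:  ", eigenvalues[outcome_index])
```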
Well, I don’t agree with the “vs”, but let that pass, since then the dialog quickly continues:
That occurs as early as one fourth of the way through the dialog, so that leaves three fourths of the dialog addressing what you are apparently calling an objective collapse theory.
Eliezer thinks objective collapse = Copenhagen. More precisely, I’ve never seen him distinguish the two, or acknowledge the possibility of denying that the wavefunction exists.
When an object leaves our Hubble volume does it cease to exist?
It’s reasonable to assume run time is important, but problematic to formalize. Run time is much more dependent on the underlying computational abstraction than description length is. Is the computer sequential? parallel? non-deterministic? quantum?
Depending on the underlying computer model, MWI could actually be faster than a collapse hypothesis. MWI is totally local, hence easily parallelizable. Collapse hypotheses, however, require non-local communication, which creates severe bottlenecks for parallel simulations.
“Imagine a universe containing an infinite line of apples.”
If we did, I would imagine it, but we don’t. In QM we don’t observe infinite anything; we observe discrete events. That some of the math used to model this involves infinities may simply be a matter of convenience for dealing with a universe that may have only a very large but finite number of voxels (or similar), as suggested by the Planck length and similar ideas.
“It’s reasonable to assume run time is important, but problematic to formalize.”
Run-time complexity theory (and also memory/space complexity, which likewise grows at least exponentially in MWI) is much easier to apply than Kolmogorov complexity in this context. Kolmogorov complexity only makes sense as an order of magnitude (i.e., as O(f(x)), not down to a constant), because the choice of language adds an (often large) constant to program length. So from Kolmogorov theory it doesn’t much matter that one adds a small extra constant number of bits to one’s theory, which makes it problematic to invoke Kolmogorov theory to distinguish between different interpretations and equations that each add only a small constant number of bits.
(Besides, QM is really the wavefunction plus the nondeterministic Born probabilities, not merely the nominally deterministic wavefunction on which MWI folks focus, and everybody needs some “collapse”/“world split” rule for when the nondeterministic event happens, so there is not even any clear constant-factor description-length parsimony to MWI.)
OTOH, MWI clearly adds at least an exponential (and perhaps worse, infinitely expanding at every step!) amount of run time, and a similar amount of required memory, not merely a small constant amount. As for the ability to formalize this, there’s a big literature on run-time complexity that is similar to, but older and more mature than, the literature on Kolmogorov complexity.
I see. I think you are making a common misunderstanding of MWI (in fact, a misunderstanding I had for years). There is no actual branching in MWI, so the amount of memory required is constant. There is just a phase space (a very large phase space), and amplitudes at each point in the phase space are constantly flowing around and changing (in a local way).
If you had a computer with as many cores as there are points in the phase space then the simulation would be very snappy. On the other hand, using the same massive computer to simulate a collapse theory would be very slow.
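Here is a rough sketch of what I mean, with an invented 1-D grid of amplitudes and a toy nearest-neighbour update standing in for unitary evolution. The grid size, coupling, and two-region “collapse” are all made up for illustration: the local step only ever touches adjacent cells, so it parallelizes naturally, while the toy collapse step needs the global Born weights before any cell can be finalized.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed "phase space": a 1-D grid of amplitudes. Its size never grows;
# the amplitudes just flow around. Grid size is an arbitrary choice.
n = 1024
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

def local_step(psi, eps=0.01):
    """One step of a toy local update: each point mixes only with its
    neighbours, so different chunks of the grid could be updated on
    different processors with only boundary communication."""
    return psi + 1j * eps * (np.roll(psi, 1) + np.roll(psi, -1) - 2 * psi)

def collapse_step(psi):
    """A toy collapse: pick one grid region according to the Born weights
    and zero the rest. This needs the *global* probabilities before any
    cell can be updated, which is what makes it awkward to parallelize."""
    halves = [np.arange(n // 2), np.arange(n // 2, n)]
    weights = np.array([np.sum(np.abs(psi[h]) ** 2) for h in halves])
    keep = halves[rng.choice(2, p=weights / weights.sum())]
    out = np.zeros_like(psi)
    out[keep] = psi[keep]
    return out / np.linalg.norm(out)

psi = local_step(psi)
psi = collapse_step(psi)
print("Norm after local step + collapse:", np.linalg.norm(psi))
```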
This is an answer to a question from another person’s thread. My question was “When an object leaves our Hubble volume does it cease to exist?” I’m still curious to hear your answer.
That’s an easy one—objective collapse QM predicts that with astronomically^astronomically^astronomically high probability objects large enough to be seen at that distance (or even objects merely the size of ourselves) don’t cease to exist. Like everything else they continue to follow the laws of objective collapse QM whether we observe them or not.
That hypothetical is radically different from believing in an infinitely expanding infinity of parallel “worlds”, none of which we have ever observed, either directly or indirectly, and none of which are necessary for a coherent and objective QM theory.
Then I can define a new hypothesis, call it objective collapse++, which is exactly your objective collapse hypothesis with the added assumption that objects cease to exist outside of our Hubble volume. Collapse++ has a slightly longer description length, but it has a greatly reduced runtime. If we care about runtime length, why would we not prefer Collapse++ over the original collapse hypothesis?
See my above comment about MWI having a fixed phase space that doesn’t actually increase in size over time. The idea of an increasing number of parallel universes is incorrect.
“MWI having a fixed phase space that doesn’t actually increase in size over time.”
(1) That assumes we are already simulating the entire universe from the Big Bang forward, which is already preposterously infeasible (not to mention that we don’t know the starting state).
(2) It doesn’t model the central events in QM, namely the nondeterministic events which in MWI are interpreted as which “world” we “find ourselves” in.
Of course, in real QM work, simulations are what they are, independent of interpretation: they evolve the wavefunction, or a computationally more efficient but less accurate version of it, to the desired elaboration (which differs radically between applications). For output they often either graph the whole wavefunction (relying on the viewer to understand that such a graph corresponds to the results of a very large number of repeated experiments, not to a particular observable outcome) or run a Monte Carlo or Markov simulation of the nondeterministic events that are central to QM. But I’ve never seen a Monte Carlo or Markov simulation of QM that simulates the events that supposedly occur in “other worlds” we can never observe; it would indeed be at least exponentially more wasteful in time and memory, yet utterly pointless, for the same reasons the interpretation itself is wasteful and pointless. You’d think that a good interpretation, even if it can’t produce any novel experimental predictions, could at least provide ideas for more efficient modeling of the theory, but MWI suggests quite the opposite: gratuitously inefficient ways to simulate a theory that is already extraordinarily expensive to simulate.
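For what it’s worth, the Monte Carlo output step I have in mind looks roughly like this. The discretized amplitudes are invented; the point is that each run records one realized outcome and nothing at all about the outcomes that did not occur.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy discretized wavefunction over a handful of detector positions;
# the amplitudes below are invented for illustration.
amplitudes = np.array([0.1, 0.5, 0.7, 0.4, 0.3], dtype=complex)
amplitudes /= np.linalg.norm(amplitudes)
born = np.abs(amplitudes) ** 2

# Monte Carlo over the nondeterministic events: each run yields one observed
# outcome; nothing is ever recorded for the outcomes that did not occur.
n_runs = 100_000
outcomes = rng.choice(len(born), size=n_runs, p=born)
frequencies = np.bincount(outcomes, minlength=len(born)) / n_runs

print("Born probabilities:  ", born.round(3))
print("Observed frequencies:", frequencies.round(3))
```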
Objective collapse, OTOH, continually prunes the possibilities of the phase space and thus suggests exponential improvements in simulation time and memory usage. Indeed, some versions of objective collapse are bona fide new theories of QM, making experimental predictions that distinguish them from the model of perpetual elaboration of a wavefunction. Penrose, for example, bases his version on a quantum gravity theory, and several experiments have been proposed to test it.
BTW, it’s MWI that adds extra postulates. In both MWI and collapse, parts of the wavefunction effectively disappear from the observable universe (or, as MWI folks like to say, from “the world I find myself in”). MWI adds the completely gratuitous postulate that this portion of the wavefunction magically re-appears in another, imaginary, completely unobservable “world”, on top of the gratuitous extra postulate that these new worlds are magically created, and all of us magically cloned, such that the copy of myself I experience finds itself in one “world” but not another. And all that just to explain why we observe a nondeterministic event, one random eigenstate out of the infinity of eigenstates derived from the wavefunction and operator, instead of observing all of them.
Why not just admit that quantum events are objectively nondeterministic and be done with it? What’s so hard about that?
This does not correspond to the MWI as promulgated by Eliezer Yudkowsky, which is more like, “In MWI, parts of the wavefunction effectively disappear from the observable universe—full stop.” My understanding is that EY’s view is that chunks of the wavefunction decohere from one another. The “worlds” of the MWI aren’t something extra imposed on QM; they’re just a useful metaphor for decoherence.
This leaves the Born probabilities totally unexplained. This is the major problem with EY’s MWI, and has been fully acknowledged by him in posts made in years past. It’s not unreasonable that you would be unaware of this, but until you’ve read EY’s MWI posts, I think you’ll be arguing past the other posters on LW.
Upvoted, although my understanding is that there is no difference between Eliezer’s MWI and canonical MWI as originally presented by Everett. Am I mistaken?
Since I’m not familiar with Everett’s original presentation, I don’t know if you’re mistaken. Certainly popular accounts of MWI do seem to talk about “worlds” as something extra on top of QM.
Popular accounts written by journalists who don’t really understand what they are talking about may treat “worlds” as something extra on top of QM, but after reading serious accounts of MWI by advocates for over two decades, I have yet to find any informed advocate who makes that mistake. I am positive that Everett did not make that mistake.
I think that’s just a common misunderstanding most people have of MWI, unfortunately. Visualizing a giant decohering phase space is much harder than imagining parallel universes splitting off. I’m fairly certain that Eliezer’s presentation of MWI is the standard one though (excepting his discussion of timeless physics perhaps).
Mainstream philosophy of science claims to have explained the Born probabilities; Eliezer and some others here disagree with the explanations, but it’s at least worth noting that the quoted claim is controversial among those who have thought deeply about the question.
Good to know.
Imagine a universe containing an infinite line of apples. You can see them getting smaller into the distance, until eventually it’s not possible to resolve individual apples. Do you want to say that we could never justify or regard-as-scientific a theory which said “this line of apples is infinite”?