[SEQ RERUN] Decoherence is Falsifiable and Testable
Today’s post, Decoherence is Falsifiable and Testable, was originally published on 07 May 2008. A summary (taken from the LW wiki):
(Note: Designed to be standalone readable.) An epistle to the physicists. To probability theorists, words like “simple”, “falsifiable”, and “testable” have exact mathematical meanings, which are there for very strong reasons. The (minority?) faction of physicists who say that many-worlds is “not falsifiable” or that it “violates Occam’s Razor” or that it is “untestable”, are committing the same kind of mathematical crime as non-physicists who invent their own theories of gravity that go as inverse-cube. This is one of the reasons why I, a non-physicist, dared to talk about physics—because I saw (some!) physicists using probability theory in a way that was simply wrong. Not just criticizable, but outright mathematically wrong: 2 + 2 = 3.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we’ll be going through Eliezer Yudkowsky’s old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Decoherence is Simple, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day’s sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
It’s too bad that EY completely misunderstands what makes a new physical model a good one. It’s not some calculation of Bayesian probabilities and Kolmogorov complexities; it’s something you can check experimentally in a lab that your previous model did not predict. Failing that, you might adopt a new theory if it provides easier ways of calculating something already observed. You do not rush into adopting something that has identical math but just feels better, unless you are a philosopher of physics, i.e. someone feeding on the crumbs and leftovers of those who do the real thing.
EY started by advocating the MWI, and ended up sliding down toward calling it decoherence, a perfectly good mathematical concept without any interpretational baggage. Does anyone else see the sleight of hand?
What a disappointing conclusion to a misplaced advocacy of a single interpretation. Where did his rationality go?
If you disagree with MWI, then… what dynamical principle describes the destruction of all the non-observed components after a measurement? It’s not built into quantum dynamics as they stand.
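To spell out what I mean, here is the usual schematic measurement model (idealized, but it makes the point): if the measured system starts in a superposition of pointer states, linear Schrödinger evolution
\[ i\hbar\,\partial_t|\Psi\rangle = H|\Psi\rangle \]
carries
\[ \Big(\sum_i c_i\,|i\rangle\Big)\otimes|A_{\text{ready}}\rangle \;\longrightarrow\; \sum_i c_i\,|i\rangle\otimes|A_i\rangle, \]
i.e. every component is still there after the measurement interaction. The jump to a single outcome \(|k\rangle\otimes|A_k\rangle\) with probability \(|c_k|^2\) has to be added as a separate, non-unitary postulate; nothing in \(H\) does it for you.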
I don’t disagree with the MWI, I just object to people privileging it over any other interpretation.
Like which? Copenhagen breaks every physical symmetry we have if you ever let it kick in, and is identical to MWI if you don’t. Some others, like the Bohm guide-wave, include MWI but refuse to recognize that they do so (anything real enough to have a causal influence on reality is itself real; the guide-wave is thus real, and the One True Worldline superfluous). The bidirectional-time interpretation is blatantly erroneous (measurement is not a time-symmetric process)...
I don’t know much about the transactional interpretation. Maybe that stands, but I suspect it leaves open the question of just what the transactions are between (if it doesn’t, then it’s making testable predictions at variance with QM as it stands, like Copenhagen).
Any others?
Can you unpack your argument against Bohm? Why does a real guide-wave require multiple worlds?
The guide-wave contains everything. It’s never collapsed; there’s no fade-away as you recede from the worldline. The rules governing the guide-wave are exactly those of quantum mechanics, which don’t mention any worldline. As far as the guide-wave is concerned, the worldline doesn’t exist. It’s a one-way connection. Right there we’ve got an oddity: the only entity in physics that would act without a reaction.
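Concretely (writing the standard single-particle Bohmian equations as a sketch): the wave function obeys
\[ i\hbar\,\partial_t\psi = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi, \]
which makes no reference to the particle’s actual position, while the position obeys the guidance equation
\[ \frac{dQ}{dt} = \frac{\hbar}{m}\,\mathrm{Im}\!\left(\frac{\nabla\psi}{\psi}\right)\bigg|_{x=Q(t)}. \]
So \(\psi\) steers \(Q\), but \(Q\) never appears in the equation for \(\psi\).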
But setting that aside, the guide-wave implements the dynamics of branches not taken by the worldline. You see a nuclear decay? Nothing’s halting the guide-wave from implementing the portion of that decay that occurred in a different direction at a different time. So it does. So you have the dead cat component as well as the live cat component. Wigner’s friend is also still chilling out waiting for something to come up.
The guide-wave doesn’t know where the worldline went, so it keeps on ticking, following the time evolution operator—and, if you choose to break it into components, meticulously working out the consequences of every one of those components—whether or not the worldline happens to be anything like that.
If you want me to think something doesn’t exist, stop implementing its dynamics!
It doesn’t, except in the minds of confused people assigning ontological significance to a calculational prescription. No interpretation out there has more predictive power than “1. Do unitary evolution, 2. Apply Born rule”.
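For concreteness, by that prescription I just mean (for a time-independent Hamiltonian)
\[ |\psi(t)\rangle = e^{-iHt/\hbar}\,|\psi(0)\rangle, \qquad P(k) = |\langle k|\psi(t)\rangle|^2, \]
and nothing more.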
If you only consider it a calculational prescription and not an ontologically real thing, then you’ve totally just accepted MWI!
Not sure what you mean. Please feel free to elaborate on that.
I’m curious what you think about the historical contingency argument against this position (i.e. if MWI had come first...). I take it you’d say that which one of two empirically equivalent models is “adopted” can indeed sometimes (always?) depend on which one came first. If you would indeed say this, whence the difference between old and new? Is the new model inferior to the old one in virtue of being new? Or in virtue of being new and being invented while the old one was around? Or because newness is usually an indicator of retrofitting theory to data? Etc.
I must be really bad at expressing my points. Again, what really matters is whether the two models make different predictions. Failing that, whichever one is easier to use (simpler math) is better. The historical issues are for historians, not physicists.
The MWI uses the exact same math as the orthodox approach, so there is no reason to prefer it over any other interpretation.
Your process seems to privilege your current model over new models with identical predictions, which leads to belief hysteresis. Your point has also been fully answered in his sequence, so either you didn’t read it, forgot, or you’re deliberately strawmanning. And I wouldn’t say that decoherence doesn’t have interpretational baggage; did you read the article you linked? Its interpretation section says that Copenhagen didn’t use decoherence and that it’s at the core of MWI. And you’re lamenting Eliezer’s loss of rationality based on what, again?