Scott Aaronson’s cautious optimism for the MWI
http://www.scottaaronson.com/blog/?p=1103
Eliezer’s gung-ho attitude about the realism of the Many Worlds Interpretation always rubbed me the wrong way, especially in the podcast between him and Scott (around 8:43 in http://bloggingheads.tv/videos/2220). I’ve seen a similar sentiment expressed before about the MWI sequences. And I say that while still believing it to be the most plausibly correct of the available interpretations.
I feel Scott’s post does an excellent job grounding it as a possibly correct, and in-principle falsifiable interpretation.
As far as I can tell (being a non-physicist), the Transactional Interpretation shares the mathematical simplicity of MWI. And here Kastner and Cramer argue that TI can derive the Born probabilities naturally, whereas MWI is said to need a detour through “the application of social philosophy and decision theory to subjectively defined ‘rational’ observers”. So maybe TI is simpler.
The “possibilities” they posit seem quite parallel (pardon the pun) to the multiple worlds or bifurcated observers of MWI, so I don’t see the philosophical advantage they tout. But integrating the Born probabilities more tightly into the physics is a plus, if true.
With respect to the Born probabilities, TI is on the same level as MWI: it has no derivation for them. Similarly, its ontology is rhetorical rather than rigorous.
A central issue for any zigzag-in-time or retrocausal theory of QM would be vacuum polarization, which was the stumbling block for the most serious effort, by Feynman and Wheeler. But Feynman-Wheeler theory is also where the path integral was born, so TI advocates could say, we just need to go back and finish it properly.
Stopped reading the linked paper when it made a mistake by treating “worlds” as literal things being “split off.” Gotta use quantum mechanics if you’re going to talk about quantum mechanics. Maybe they corrected it later, but I didn’t even want to wade through to find out.
Although they do not “split off” in the sense envisioned early on by DeWitt, there are definitely some unanswered questions here. Alastair Wilson and Simon Saunders have raised this issue. Are all the worlds in the wavefunction from the beginning of time, or do they somehow spring out from one world? This is called overlap vs. non-overlap (first discussed by David Lewis).
Since you are the expert, by all means answer this for us.
So, by “world” in this post I’ll mean “basis state for the universe.” The basis is arbitrary, so what “world” means will still depend on how I’m choosing what “worlds” are—there’s the energy basis, for instance, where nothing ever changes if you look at just one of those “worlds.” But you can have animals or computers in your basis states if you want—they aren’t energy eigenstates, so they change with time.
Anyhow, currently the universe is spread out over a very wide variety of energy eigenstates, which is a fancy way of saying that lots of stuff changes. If we only allow quantum mechanics (that is, strictly follow MWI), this spread over “energy-worlds” is how the universe has been since the beginning of time. But if we look at the exact same state a different way, you could just call the initial state of the universe a basis state, and then, lo and behold, the universe would have sprung from one world, and the distribution of worlds then changed over time. This way of looking at things is probably pretty useful for cosmology. Or you could use worlds that change over time but don’t include the original state of the universe, giving you overlap again. This is what we do unintentionally when we choose worlds that have humans in them, which is also pretty useful :)
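To make that concrete, here’s a minimal two-level toy model (my own illustration, not anything from Scott’s post): the very same evolving state has constant weights over energy-basis “worlds” but changing weights over x-basis “worlds.”

```python
import numpy as np

# Toy universe: one qubit with Hamiltonian H = sigma_z (hbar = 1),
# so the energy basis is the z basis.
energies = np.array([1.0, -1.0])
psi0 = np.array([1.0, 1.0]) / np.sqrt(2)   # initial state |+>

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # x-basis states
minus = np.array([1.0, -1.0]) / np.sqrt(2)

for t in [0.0, 0.5, 1.0]:
    psi = np.exp(-1j * energies * t) * psi0  # Schrodinger evolution

    # Weights over energy-basis "worlds": constant for all time
    energy_weights = np.abs(psi) ** 2

    # Weights over x-basis "worlds" (|+>, |->): change with time
    x_weights = [abs(plus.conj() @ psi) ** 2, abs(minus.conj() @ psi) ** 2]

    print(t, np.round(energy_weights, 3), np.round(x_weights, 3))
```

The energy-basis weights stay at (0.5, 0.5) forever, while the x-basis weights start at (1, 0) and spread out over time: “there from the beginning” in one basis, “sprung from one world” in the other.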
For overlap vs. non-overlap to get more complicated than “both are valid pictures,” you’d need some model where there weren’t any static worlds to talk about—this would be a change to QM though. Also, this does raise the interesting question of how complicated that initial world (if we look at it that way) was. It doesn’t have to be too complicated before we see interesting stuff.
Anyhow, it’s pretty likely I was too hasty in my mistake-detection. But meh, I rarely regret putting off reading things. And I only occasionally regret putting my foot in my mouth :)
To be perfectly honest, I do not see an answer to my question here.
You do explain some, but it seems that you end up indirectly stating that it is “semantics” whether the worlds overlap or do not overlap. From what you say here it all depends on how you look at it, but there is no “truth” of the matter. But that cannot be: either the worlds are overlapping or they are not. That is just the very fact of objective reality.
So while “both pictures are valid” in terms of math, they cannot both be true. Metaphysically they are not the same, and they have very different implications for epistemology. Also in terms of, for instance, quantum suicide: in overlap, it’s hard to argue against some sort of Quantum Immortality, whilst in non-overlap death is just as in a classical one-world theory.
What I am saying is that if one person says “all the worlds have always existed” and another says “the worlds spread out from one world,” it’s possible that both of them are being consistent, but then they are using two different definitions of “world.” I am also saying that there is no basis that is “more real” than the others—only that some are more useful, and it’s okay that people use different definitions as long as they’re clear about it.
And yes, both pictures can describe the same thing. Have you worked with Bell states at all? Or am I misinterpreting your name and you actually haven’t taken a class on quantum mechanics before?
The quantum world is like a diagonal line. One person comes up to it and says “Ah! Here is a diagonal line! It has just as much horizontal as it does vertical, therefore it is a mixture between horizontal and vertical.” Another person comes up to it and says “Ah! Here is a diagonal line! It is a perfect rising diagonal, and is not even a little biased towards the falling diagonal.” Will these two people argue over whether the line is made of two components or one?
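In vector form (my own rendering of the analogy), the two descriptions look like this:

$$\binom{1}{1} \;=\; \underbrace{\binom{1}{0}+\binom{0}{1}}_{\text{horizontal + vertical: two components}} \;=\; \underbrace{\sqrt{2}\,\hat{e}_{\nearrow}}_{\text{one rising diagonal: one component}}, \qquad \hat{e}_{\nearrow} \equiv \frac{1}{\sqrt{2}}\binom{1}{1}.$$

Same line, two equally valid decompositions; the count of “components” lives in the choice of basis, not in the line.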
I understand what you are saying, which I think my last post showed quite clearly, but this still does not answer the actual question at hand. What you are saying really amounts to saying that “realism and solipsism are the same”, because we cannot really distinguish either through science, all we can do is use logic and metaphysical “reasoning”.
Obviously both overlap and non-overlap cannot be true, they are ontologically different, yet you seem to say that “because the equations don’t decide, reality isn’t decided,” which is some sort of extreme positivism.
Have you read any of the papers that outline this? Alastair Wilson has written several: http://www.alastairwilson.org/
Maybe you’re just used to talking with people who are better at interpreting you, or people who are more similar to you. Clearly understandable to people you talk with every day isn’t always clearly understandable to me, as we’ve seen.
Could you explain this? Is this a metaphor, or have you interpreted my statements about vectors to actually bear on realism vs. solipsism? Perhaps we have been talking about two different things.
Ah. See, this is the sort of thing I was trying to illustrate with the example of the diagonal line. A line being made of one component is ontologically different from a line being made of two components. Does this matter?
What happens if a one-componenter runs into a two-componenter? Do they argue? Does the first say “because of [insert convincing component-ist argument here], it’s ONE component!” Are there valid component-ist arguments? How can the two-componenter respond?
I think it would go more like this: the first one says “hey, if you describe lines in terms of plus and minus diagonals, this one is clearly just a plus diagonal, so why say it has two components?” And the second says “Oh, huh, you’re right. But there are lots of horizontal and vertical lines out there, so two-components is more useful.” And the first says “yeah, that makes sense, unless you were building a ramp or something.” “Well then, cheerio.” “Toodles.”
The reason this was so anticlimactic is because each participant could frame their ontology in a universal language (vectors!), and the ontologies were lossless transformations of each other—in this case the transformation was as simple as tilting your head. This clarity of the situation leaves no room for appeals to componentism. Arguments are for when both people are uncertain. When people know what’s going on, there’s simply a difference.
Could you point me to an example? Similar to how we are potentially talking about two different things, Alastair Wilson seemed to be talking about something other than physics in the papers I skimmed. The phrase “the most appropriate metaphysics to underwrite the semantics renders Everettian quantum mechanics a theory of non-overlapping worlds” exemplifies this for me.
Sure, I can accept that I might have overestimated how well you should’ve been able to interpret my post.
Solipsism vs Realism is indeed a metaphor. If you are saying what I think you are saying, then it is quite equivalent.
I do not think that your example of a diagonal line is the same as overlap vs. non-overlap at all. In overlap vs. non-overlap the ontological differences matter. In an overlapping world, if you are shot, you are guaranteed to survive in another branch, so QI has to be true. In non-overlap, if you get shot, you just die. There is no consciousness that continues on in another branch that it was never connected to...
Also it does away with the incoherence problem, which is HUGE if you are in the “the Born rule can be derived decision-theoretically” camp.
It is metaphysics; I’ve already said this in the first post. There is no experiment that can ever distinguish the two, just like no experiment can ever tell us whether solipsism or realism is true. But obviously (assuming MWI is right) one of them is true, only one, not both.
I think 5 of those papers are directly about non-overlap vs. overlap, and I can’t remember which makes the point best right now, so read any of them you’d like. Or you can read Simon Saunders’s paper, a chapter of the 2010 book Many Worlds?, here: http://users.ox.ac.uk/~lina0174/chance.pdf
Ah, I see. “Metaphysics.”
By which you mean “taking human morality and decision-making, which evolved in a classical world, and figuring out what decisions you should make in a quantum universe.”
Would you agree that overlap vs. non-overlap cannot be answered without looking inside humans, and in fact has little to do with the universe apart from a few postulates of quantum mechanics? For some reason I thought we were talking about the universe.
Anyhow, I think Shane Legg had a nice paper on porting utility functions, though of course humans are inconsistent and you immediately run into problems of how to idealize them. The basic idea being that you split up changes into “new things to care about” and “new ways to express old things.” Quantum suicide is probably the easiest thing to deal with via this method.
You have a theory—“quantum mechanics without wavefunction collapse”—in which the whole of reality is supposed to be equal to a single big object, the wavefunction of the universe. There are various mathematical facts about that object: the existence of various sets of basis functions, the dynamical process of decoherence, and so on.
Now a questioner says, “OK. You say that there are multiple copies of me inside the wavefunction. Is that because there is one of me that splits into many, or were there just parallel mes living separate but similar lives?” You’ve implied that the answer depends on the definition of something. Can you tell the questioner what definition of self leads to the different answers? So far you’ve used the example “| / > = | | > + | _ >”, which doesn’t tell anyone whether they should think of themselves as ”/”, as “|” and ” _ ”, or otherwise answer the question. It illustrates a mathematical fact about wavefunctions, not a fact about how to find yourself in them.
I do? Well, I can pretend I do, at least.
If we want to recover classical choices in cases where there are clear classical analogs, one of you splits into one. If you’d rather follow other intuitions, though, you’ll get different answers (see: quantum suicide).
Note that since humans aren’t energy eigenstates, there is no general way to get completely “parallel lives”—you always interfere. But because the world is nice and orderly you can get pretty dang close to parallel most of the time.
Well, it answers the person who asks “But is the line really one component, or is it really two components?” And that answer is that they’ve gotten their levels confused—number of components is in your description of the line, not in the line.
Which, to make sure I’m being clear, is analogous to how I interpreted Quantumental’s sentence “Obviously both overlap and non-overlap cannot be true, they are ontologically different.” If we go with a correspondence theory of truth, we run into a problem because there is no overlap or non-overlap out in quantum mechanics that this sentence could correspond to. Instead, the thing that would make it true or false is humans; specifically how they choose what’s right when presented with quantum mechanics. Unfortunately, humans are inconsistent, so you immediately run into the problem of how to idealize them.
I get it now. You’re saying that the relativism of how one may define one’s personal identity is so great that, in a quantum multiverse, even whether you are splitting into multiple selves or not is a matter of how you define yourself.
Still, that’s not the end of it, because then we can ask exactly what parts of the wavefunction are “potential person-parts”. I may have some freedom to choose whether a particular object, trait, thought, or state of mind that once existed or that could exist is “part of me”, but at some level there has to be an objective correspondence between “person-parts” and “wavefunction-parts”.
You may be a self-defining process, but the point of materialism is that this self-defining process is not something separate from the wavefunction which then freely chooses which parts of the wavefunction are going to count as “part of me”; the self-defining process is a part of the wavefunction, and the choosing about what to identify with, is just part of wavefunction dynamics. Eventually you have to ground the whole thing in physics rather than in cognition. Any thoughts on how that works?
If you ask two people, will they necessarily give you the same correspondence between descriptions of matter and person-parts? You keep using that word “objective,” I do not think it means what you think it means :P
Sorry to be such a downer, but as a human my definition of anything complicated is imprecise and pretty inconsistent—if you ask me two different ways I can give you two different answers. I honestly do not know any particularly good ways to get definitions out of humans.
I guess one way is to stick to simple things—the “looking under the lamppost” approach. For example, the “computational me” who thinks some exact thought that I’m thinking is a better-defined idea than most. But on account of its simplicity it misses a lot of nuance in the human idea of “me,” and so it’s not actually very useful.
Nonetheless it’s important to attend to these “better-defined” parts of you, because that’s where we start to get away from the big distraction created by the freedom to self-define. This flexibility in the notion of self is mostly about what you get to include and exclude. So there’s a large collection of “potential self-parts”, but the potential self-parts themselves don’t exist just by definition; they are the actually existing raw material in terms of which a definition of self gets its meaning—these are a part of me, those are not. There has to be an objective account of what these “parts” are, in terms of the wavefunction ontology, and it ought to say unambiguously whether or not they “split”.
I’m not clear on what you mean by “self-parts” here, but I’m assuming you mean something like basis states that contain people like you, which you can describe people in terms of. In which case I’ve already trodden this ground—no such objective account necessarily exists, but such things can be useful, though you still wouldn’t be able to get any two different people’s idealized algorithms to agree on the edge cases.
I don’t mean something that contains you, I mean something that you contain.
Ooh, a non-helpful one-sentence-off!
The wavefunction is not necessarily separable.
Something had better be separable, because all is not one.
So you see no objective facts about MWI? Is non-overlap vs. overlap nonsense in your opinion?
Yes, there are objective facts. Whether a wavefunction is made of 2 components or 1 is still not independent of your perspective. No, it’s not necessarily nonsense. I am just claiming that the unsolved problems of stuff like “overlap” are not due to a lack of information about quantum mechanics, but due to a lack of information about very complicated things humans do. If the difficulty of understanding how humans categorize things and revise categories gets attributed to basic quantum mechanics, then we may get some nonsense.
You say there are objective facts, yet you claim it depends on one’s perspective... this is contradictory. Have you read any of Wilson’s papers? Or Saunders, Lawhead, Ismael, etc.? All have written papers clearly indicating the OBJECTIVE difference.
What I am saying is that there are objective facts, but that a wavefunction being two components or one simply happens not to be one of those facts. It’s like “is this painting beautiful?” If you look closely enough at one person and make some idealizations, you can say objectively (well, plus idealizations) whether a painting is beautiful for that person, but what is thus beautiful for one person still doesn’t have to be beautiful for the next.
On the other hand, if you, say, explained Peano arithmetic to two different people and asked them whether some statement was a theorem or not (and made some idealizations), what is a theorem for one person is a theorem for the next. Or if you asked them to measure the space-time interval between two events. Or if you asked them about the various components of a wavefunction, given a certain basis.
He says that the math is simpler under MWI.
Can someone explain why that’s true (or false)?
I think the short version is that you don’t need math that covers the wavefunction collapse, because you don’t need the wave function to collapse.
For a longer version, you’d need someone who knows more QM than I do.
In non-relativistic MWI, the evolution of the quantum state is fully described by the Schrodinger equation. In most other interpretations, you need the Schrodinger equation plus some extra element. In Bohmian mechanics the extra element is the guidance equation, in GRW the extra element is a stochastic Gaussian “hit”.
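For concreteness, here are the standard textbook forms (the guidance equation is written for N spinless particles):

$$i\hbar\,\frac{\partial \Psi}{\partial t} = \hat{H}\,\Psi \quad \text{(the only law in MWI)}, \qquad \frac{dQ_k}{dt} = \frac{\hbar}{m_k}\,\operatorname{Im}\!\left[\frac{\nabla_k \Psi}{\Psi}\right](Q_1,\dots,Q_N) \quad \text{(Bohm's extra element)}.$$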
In Copenhagen, the extra element is ostensibly the discontinuous wavefunction collapse process upon measurement, but to describe this as complicating the math (rather than the conceptual structure of the theory) is a bit misleading. Whether you’re working with Copenhagen or with MWI, you’re going to end up using pretty much the same math for making predictions. Although technically MWI relies only on the Schrodinger equation, if you want to make useful predictions about your branch of the wave function, you’re going to have to treat the wave function as if it has collapsed (from a mathematical point of view). So the math isn’t simpler than Copenhagen in any practical sense, but it is true that from a purely theoretical point of view, MWI posits a simpler mathematical structure than Copenhagen.
In other words, MWI says: apply Copenhagen for anything useful.
MWI says that you apply no more than one collapse in every experiment, and you know why it is a collapse from your point of view. Copenhagen requires you to decide without guidance whether to apply collapse inside the experiment.
Yeah, just like statistical mechanics requires us to model systems as having infinite size in order to perform many useful calculations (e.g. phase transitions, understood as singularities in thermodynamic potentials, can only take place in infinite particle systems). It doesn’t follow that we should actually believe that these systems have infinite size.
Also, the claim is not that MWI is mathematically identical to Copenhagen, just that it works out that way in most practical cases. The Copenhagen interpretation is sufficiently ill-defined that it’s unclear what its mathematical structure actually is. But as Aaronson points out in the post, there are predictions that distinguish between MWI and Copenhagen.
I don’t believe that he said anything of the sort. At about 50min Scott talks about quantum speedup as utilizing the computational power of many worlds, provided they exist, not as any kind of experimental distinction (indeed, quantum computing is interpretation-agnostic).
I was talking about the blog post, not the bloggingheads video. He doesn’t outright declare that the two interpretations are distinguishable, but that position is strongly suggested by both his discussion of betting on the extension of linearity to macroscopic scales and his subsequent discussion of the Wigner’s friend experiment.
Hmm, if anything, the most interesting near-future experiment he mentioned is the one by Dirk Bouwmeester’s group. No one has the foggiest idea about how to construct the Wigner’s friend experiment, not even in principle, given that it is no different from the original (though non-lethal) Schrodinger cat experiment, where Wigner’s friend is the cat and Wigner is the observer.
Surely there’s a difference between thinking that experiments that can distinguish MWI and Copenhagen are infeasible for various technological reasons, and thinking that MWI and Copenhagen are empirically indistinguishable. I usually interpret empirical indistinguishability as “no conceivable distinguishing experiment” rather than “no feasible distinguishing experiment”.
There are certain observables for which MWI and Copenhagen predict different expectation values, provided decoherence is contained. The problem is, we do not currently have much of an idea of how we could go about making the relevant measurements, mainly because we do not know how to keep systems as large as Wigner (or Schrodinger’s cat) informationally isolated for a sufficiently long period of time.
Yes, indeed. And it seems like there is a way to potentially falsify MWI, after all (see below). There is no way of falsifying the orthodox approach (“shut up and calculate, unless you can say something instrumentally useful”) as yet, because it does not treat collapse as “objective”, only as a calculational prescription (this is the part EY completely refuses to acknowledge, and instead goes on constructing and demolishing some objective collapse model). To falsify the orthodox approach one has to show that the Born rule is violated macroscopically, e.g. that you can see something other than a single eigenstate after a measurement, or that the measured probability of it is not the square amplitude.
Now, back to the experimental testing. If I understand it correctly, the quantum cantilever experiment of Bouwmeester, once performed, is likely to show one of two things:
Such a macroscopic object can be put into a superposition of two different spatial states, thus violating the decoherence limit proposed by Penrose. This will falsify his specific model of gravity-induced single world, and would thus be a reason to update toward MWI, though there is still no contradiction with the orthodox (unitary evolution+Born rule) prescription, unless the cantilever remains in the superposition of states after the measurement (not a chance in hell).
The cantilever remains in a single state, despite the predictions of gravity-less QM. This is by far a more interesting outcome, as it would for the first time show the macroscopic limits of the quantum world. This would score a point for gravity-influenced decoherence and single world, and would be a significant blow to MWI.
There is always a chance that the experiment will show something else entirely, which would be even more exciting.
That doesn’t sound right. Famously, matrix mechanics is “equivalent to the Schrödinger wave formulation”, and matrix mechanics doesn’t have multiple interpretations.
I view this whole subject as a colossal waste of time.
As you say, matrix mechanics (or the Heisenberg formulation) is equivalent to the Schrodinger formulation, so it has exactly the same range of interpretations as the Schrodinger formulation.
If you want a concrete example of an experiment that would distinguish between MWI and Copenhagen, here it is:
Prepare an electron so that its z-spin state is the superposition |up> + |down> (I’m dropping the coefficients for ease of typing). Have a research assistant enter an appropriately isolated chamber with the electron and measure its z spin. If Copenhagen is correct, this will lead to the collapse of the superposition, and the electron’s state will now be either |up> or |down>. If MWI is correct, the electron’s state will become entangled with your research assistant’s state, and the entire contents of the chamber will now be in one big superposition from your perspective.
Now have your research assistant record the state she measures by preparing another electron in that quantum state. So if she measures |up> she prepares the other electron in the state |up>. Again, if Copenhagen is correct, this new electron’s state is either |up> or |down>, whereas if MWI is correct, its state is in an entangled superposition with the original electron and the research assistant. Call this entangled state predicted by MWI psi.
Now you (from outside the chamber) directly measure the difference between the x-spin (not the z-spin) of electron 2 (the one prepared by your assistant) and the x-spin of electron 1. I can’t tell you off the top of my head how to operationalize this measurement, but the fact remains that it is a bona fide observable. If you do the math, it turns out that the entangled state psi is an eigenstate of this observable, with eigenvalue zero. So if MWI is right, whenever I make this measurement I should get the result zero. On the other hand, neither of the states predicted by Copenhagen are eigenstates of this observable, so if Copenhagen is right, if I keep repeating the experiment I will get a distribution of different results.
tl;dr: Basically, all I’ve done here is take advantage of the fact that there are observables that can distinguish between mixtures and superpositions by detecting interference effects.
Of course, in order for this experiment to be feasible, you need to make sure that the system consisting of the two electrons and the assistant doesn’t decohere until you make your measurement. With current technology, we’re not even close to making this happen, but that is a problem with the feasibility of the experiment, not its bare possibility.
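If it helps, here is a minimal numpy sketch of the core point (my own illustration, stripped down to the two electrons only, with the assistant’s degrees of freedom left out): an observable whose statistics differ between the superposition and the mixture.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])  # Pauli x

up = np.array([1.0, 0.0])    # |up>  (z basis)
down = np.array([0.0, 1.0])  # |down>

# Observable: difference of the two electrons' x spins
O = np.kron(sx, I2) - np.kron(I2, sx)

# No-collapse prediction: the superposition (|up,up> + |down,down>)/sqrt(2)
psi = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)

# Collapse prediction: a 50/50 classical mixture of |up,up> and |down,down>
uu, dd = np.kron(up, up), np.kron(down, down)
rho_mix = 0.5 * (np.outer(uu, uu) + np.outer(dd, dd))

print(O @ psi)                    # zero vector: psi is an eigenstate, eigenvalue 0
print(psi @ O @ O @ psi)          # variance in psi: 0.0 (every run gives exactly 0)
print(np.trace(rho_mix @ O))      # mixture mean: 0.0
print(np.trace(rho_mix @ O @ O))  # mixture variance: 2.0 (outcomes scatter)
```

The superposition is annihilated by O, so measuring it always returns exactly zero; the mixture has the same mean but a nonzero spread.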
You seem to conflate the Copenhagen interpretation with objective collapse interpretations. Copenhagen doesn’t make any commitment to the existence and nature of either the wavefunction or the collapse process: it says they are just mathematical descriptions useful for predicting empirical observations. While the Copenhagen interpretation has itself multiple interpretations, it is typically understood as the instrumentalist “shut up and calculate!”
The thought experiment you describe appears to be flawed. According to the principle of deferred measurement, in any quantum experiment you can always assume that measurement (that is, collapse) occurs only once, at the end of the experiment. Intermediate measurement operations can be replaced by unitary operations, and all classical systems involved (automated devices, cats, people, …) are treated as fully quantum systems whose state can become entangled with the state of the “true” quantum system. This is a mathematical theorem of formal quantum mechanics, hence it holds in all interpretations (at least approximately, see below). You can’t use internal measurements to distinguish between interpretations, at least not as trivially as in your proposed experiment.
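A small sketch of that theorem in action, using a single qubit to stand in for the assistant (my own toy model): once the assistant is entangled and the final measurement touches only the two electrons, tracing the assistant out leaves exactly the collapse mixture.

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Unitary account: (|up, A_up, up> + |down, A_down, down>)/sqrt(2),
# ordering: electron 1, assistant, electron 2
psi = (kron3(up, up, up) + kron3(down, down, down)) / np.sqrt(2)
rho = np.outer(psi, psi)

# Partial trace over the assistant (the middle qubit)
rho = rho.reshape(2, 2, 2, 2, 2, 2)
rho_electrons = np.einsum('iajkal->ijkl', rho).reshape(4, 4)

# Collapse account: 50/50 mixture of |up,up> and |down,down>
uu, dd = np.kron(up, up), np.kron(down, down)
rho_collapse = 0.5 * (np.outer(uu, uu) + np.outer(dd, dd))

print(np.allclose(rho_electrons, rho_collapse))  # True: identical predictions
```

Any observable on the electrons alone, including the x-spin difference proposed above, then has identical statistics in the two accounts.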
Objective collapse interpretations like Penrose’s predict that closed-system evolution becomes non-linear above a certain scale or in certain conditions, hence they are in principle distinguishable from the other interpretations. Testing would require preparing some specific kind of coherent superpositions of the state of large-scale quantum systems, keeping them significantly insulated from decoherence for a time long enough to make the nonlinearities non-negligible and then measuring. The results should deviate from the predictions of standard quantum mechanics.
It is true that the historical Copenhagen interpretation—the one developed by Bohr—is instrumentalist. But that’s no longer what people mean when they refer to the Copenhagen interpretation. Look at pretty much any introductory text on QM and the Copenhagen interpretation (or the “orthodox” interpretation) is presented as an objective collapse theory, with collapse being a physical process that takes place upon measurement.
As for your point 2, it just isn’t true that all collapse interpretations assume that collapse only takes place at the end of the experiment. Take GRW, for instance. It is a spontaneous collapse theory, where collapse is governed by a stochastic law. There is nothing in this law that prevents collapse from occurring midway through an experiment, or alternatively not occurring at any point in the experiment, not even the end.
Also, if collapse is supposed to take place only at the end of a measurement, how do objective collapse theories make sense of phenomena like the quantum Zeno effect, where measurement is taking place continuously throughout the course of the experiment?
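(For reference, the textbook form of the effect: frequent projective measurements freeze the evolution. A toy calculation, assuming a collapse at each intermediate measurement:)

```python
import numpy as np

theta = np.pi / 2  # total rotation that would take |0> all the way to |1>
for n in [1, 2, 10, 100]:
    # Rotate by theta/n, then measure; survival in |0> per step is cos^2(theta/n)
    print(n, np.cos(theta / n) ** (2 * n))
# 1 -> 0.0, 2 -> 0.25, 10 -> ~0.78, 100 -> ~0.98: measuring freezes the state
```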
That is perhaps a common misconception in popular science publications aimed at non-technical audiences, but I’m not aware that it’s prevalent in technical literature. Even if it was, that’s not a good reason to further the misuse of terminology.
It doesn’t matter. All interpretations must agree with the predictions of the theory, at least in all the cases that have been practically testable so far. The experiment you proposed predicts the same results whether or not you shield the intermediate observer from decoherence. If your math predicts different results, then there must be some mistake in it.
Why wouldn’t it make sense of it?
MWI says: apply Born’s rule to get anything useful.
If that’s what you call Copenhagen, then sure they’re the same thing—but then why was Everett so scandalous and ridiculed? Something had to be different.
No idea, I don’t find MWI ridiculous, just not instrumentally useful, given that you still have to combine unitary evolution with the Born rule to get anything done. This is a philosophical difference with EY, who believes that territory is in the territory, not in the map.
… territory is in the territory.
Umm. That sounds… non-controversial. Did I read that wrong somehow?
No, you read it right. However, instrumentally, the map-territory relation is just a model, like any other, though somewhat more general. It postulates the existence of some immutable objective reality with fixed laws, something to be studied (“mapped”). While this may appear self-evident to a realist, one ought to agree that it is still an assumption, however useful it might be. And it is indeed very useful: it explains why carefully set up experiments are repeatable, and assures you that they will continue to be. Thus it is easy to forget that it is impossible to verify that “territory exists independently of our models of it”, and go on arguing which of many experimentally indistinguishable territories is the real one. And once you do, behold the great “MWI vs. Copenhagen” LW debate. If you remember that territory is in the map, not in the territory, the debate is exposed as useless, until different models of the territory can be distinguished experimentally. Which will hopefully happen in the cantilever experiment.
The territory is not in the map, because that is nonsense.
That does not beg the question against instrumentalism and in favour of realism, because the territory does not have to exist at all.
Realists and anti-realists are arguing about whether the territory exists, not where.
That’s the standard reaction here, yes. However “that is nonsense” is not a rational argument. You can present evidence to the contrary or point out a contradiction in reasoning. If you have either, feel free.
I don’t understand what you are saying here.
Maybe so, then I am neither.
I’ll point out a contradiction: territory is defined as not-map.
“I am neither”
… in the sense that you are using the word territory in a way that no one else does.
One can postulate that there is an end to a long stack of maps of maps, which ends somewhere with a perfect, absolute, “correct” something. We call that the territory. I don’t postulate that.
This is one of those times it really is useful to pull out definitions… and for any reasonable definition of ‘territory’ and ‘map’, that’s self-evidently true. Our models, even if correct, are underdetermined to the point that they cannot completely explain everything. Therefore, there’s something else. That’s what we call the ‘territory’.
Whether the territory is vastly different from our models or simply more detailed, they do not coincide. And on the word ‘independent’ - well, the territory contains the map, so there’s no short-circuit if the territory has map dependence.
Again, that’s the realist approach. The minimum one can state is much less certain than that: all we know for certain is that carefully repeated experiments produce expected results. Period. Full stop. Why they produce expected results (e.g. because there is “something else” that you want to call the territory) is already a model. It’s a better model than, say, Boltzmann brains, but it is still a model. The instrumental approach is to consider all models giving the same predictions isomorphic, and, in particular, all experimentally indistinguishable territories isomorphic.
It’s on par with cogito, ergo sum. I don’t know everything, therefore something else exists. I don’t feel obliged to cater to people who are unwilling to go along with this.
No obligation on your part was implied. I only suggested tabooing the word “exist” and replacing it with what you mean by it. I bet that you will end up either with an equivalent term, or with something perception-related. So your choice is limited to postulating existence, including the existence of something that isn’t your thoughts (the definition of realism), or using it as a synonym for territory in the map-territory model created by those thoughts. There are fewer assumptions in the latter, and nothing of interest is lost.
If not from Everett, I would expect David Deutsch to say: “You and I have completely different sets of parallel worlds, for Relativity’s sake. Every slightly different observer comes with his own Multiverse collection of parallel worlds.”
Those people should update to GR; it’s about time.
Let’s restate this philosophical problem as a problem of ontology.
Imagine that you want to write a computer program that perfectly simulates what’s going on at the quantum level.
Now the problem comes down to asking how many classes you need to define in your domain model.
When you run your program, will there be only one class of object instantiated (the wave class), or are there two different types of objects (of wave class and particle class)?
The many worlds interpretation is equivalent to saying you only need to define one class in your model (wave class), because wave objects are all there are.
Other interpretations are equivalent to saying you need to define at least two different classes (waves and particles), since both types of object can be instantiated, and you therefore also need to define the interface specifying the message passing between the two different types of object, as per the rules of object-oriented programming. (A toy sketch follows below.)
When the problem is restated in this way, much confusion immediately clears.
It should be obvious that the many worlds interpretation has much greater simplicity and clarity, and that all other interpretations are in fact a return of dualism in disguise (with all the associated problems thereof). It is for that reason that many worlds wins hands down.
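Here is the toy sketch of the two domain models (illustrative class names only, nothing more):

```python
class Wave:
    """MWI domain model: the universal wavefunction is the only kind of object."""
    def evolve(self, dt):
        ...  # unitary Schrodinger evolution; nothing else ever happens

class Particle:
    """Second class required by collapse-style models: a definite outcome."""

class CollapsingWave(Wave):
    """Collapse-style domain model: two classes plus an interface between them."""
    def measure(self):
        ...  # Born rule goes here: discard branches, instantiate a Particle
        return Particle()
```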
Just simulating the wave dynamics is not enough. You have to generate some further object from the waves, in order to get something in your simulation with the properties of reality. For example, you can repeatedly apply the Born rule as in Copenhagen to get a single stochastic history of particles, in which events occur with the appropriate frequencies. Or you could specify a deterministic rule for branching and joining, in which worlds are duplicated in different quantities at moments of branching in accordance with the Born rule, to create a deterministic multiverse in which events occur with the appropriate frequencies. Neither approach is very elegant; it’s simpler to suppose that the waves are an incomplete statistical-mechanical description of something more fundamental (which, because of Bell’s theorem, can’t be a locally deterministic system in any obvious way, though it might be a local determinism whose variables are then transformed nonlocally to give conventional space-time).
But MWI advocates (at least of the Oxford variety) claim that the properties of reality emerge from the wavefunction. No additional “beables” are required. I know you disagree, but I’m pretty sure that’s the sort of view Aaronson is referring to when he says MWI is mathematically simpler. The fundamental ontology is the wavefunction itself, not worlds of matter/energy whose multiplication is described by the wavefunction.
I certainly don’t think Scott belongs to the Oxford school. He’s probably just one of those people for whom the existence of probability-like numbers in the density matrix is enough. (The flaw of this perspective is that you need these numbers to appear in your ontology as the relative frequencies of something, because that’s what they are in reality.)
I was quite certain that Wallace et al (Oxfordians) dismissed pure WF realism in favour of state space realism when attempting to make it relativistic?
I’m assuming this whole conversation is about non-relativistic quantum mechanics.
But obviously reality is not about non-relativistic quantum mechanics. So whenever a discussion about interpretations is brought up, I think it is dishonest to argue FOR a partial version of it that really has nothing to do with reality.
Fair enough. Unfortunately, the interpretive options for QFT are still not clearly worked out. I think the idea among quantum foundations people tends to be that we first figure out the best interpretation in the relatively simpler domain of NRQM, then think about how to adapt this interpretation to meet any new challenges from QFT.
This is no doubt partly due to the fact that the formal structure of NRQM is much better systematized and understood. We basically have a satisfactory axiomatization of NRQM, but attempted axiomatizations of QFT still have many lacunae. So there’s definitely a “looking for your keys under the streetlight even though you dropped them in the dark” thing going on here.
By all means! Relativity complicates this MWI. We have different splits for different observers, since not everything is simultaneous for everyone.
Now what if the future velocity of an observer is the result of a quantum experiment’s outcome? Which is very often the case, if not always!
The non-relativistic version of MWI is NOT real, anyway.
The thing that’s always bugged me about the MWI is that it doesn’t seem physically sensible. If something isn’t physically sensible, then you need to check on your model. This happens all the time in physics—there are so many basic problems where you discard solutions or throw out different terms because they don’t make sense. This is the path to successful understanding, rather than stubbornly sticking to your model and insisting that it must be correct.
The impression I get is that, if the math leads you to make a conclusion which seems like physical nonsense, then you ought to trust your gut, rather than trusting the math. MWI sounds like nonsense, and completely physically implausible, and that’s far more convincing to me than claims that “the math must make it so.”
By “physically sensible,” what do you mean? When I say that, I usually mean something that my brain is good at modeling.
In what sort of situation would you expect a correct theory to not be physically sensible?
It’s hard to put my finger on this exactly. To me, physically sensible just means it sounds reasonable under the context of observations and everything else that we know. In this specific case, the idea of infinitely many universe branches constantly forking off doesn’t seem physically sensible to me when all we observe is a single universe.
This just happens all the time. For example, to get the free-fall time for a falling object, you have to solve a quadratic, which in principle gives a “negative time” root/solution. This solution is obviously nonsense, so you just discard it and don’t pay attention to it, but you don’t conclude that the theory is wrong.
If you don’t discard it, and do pay attention to it, you discover it is sensible.
“Negative time” is time before the time you labelled as zero. The negative solution is the time at which the object would have been at the end point, moving upwards, to get to the starting point at time zero.
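Concretely, for an object dropped from height $h$:

$$y(t) = h - \tfrac{1}{2}gt^2 = 0 \quad\Longrightarrow\quad t = \pm\sqrt{2h/g},$$

and the root $t = -\sqrt{2h/g}$ is just the earlier moment at which an object on the same parabolic trajectory would have passed ground level moving upward.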
Well, the negative-time solution can be eliminated by using math too—“the theory” was never the equation with two roots—it was the process you used to get the right answer. What I want to know is, can you grok a case where the actual correct theory isn’t physically intuitive, but is correct?
First of all, I disagree that the negative time solution can be removed using math; the math will tell you that the solution is perfectly valid.
Secondly, yes, there are cases like in statistical mechanics or basic QM where the theory isn’t that intuitive, dealing with huge numbers of particles (as in SM) or dealing with position probabilities (as in QM), but where the process makes sense (I can grok it).
But these theories have clear interpretations in terms of observables; SM has a systematic justification in physical intuition (the preferred configurations being those with the most probability, or something of that nature), and QM develops right from the beginning how the wave-function picture can be seen as a generalization of the classical picture (positions directly become position operators, as with momenta and so on). There’s no such obvious justification for the MWI, in my mind; the linkage between there being many branches of the solution and there being many universes is weakly justified at best.
But you could, say, write a computer program that gave you the right answer to classical mechanics problems, right? In order to write this program, the knowledge you have that tells you that when you want a length of time, you want a positive number would have to be translated into “computer language,” i.e. math.
That is, when I say “you can remove nonsense solutions by using math” I mean “all you have to do is make the theory already contain your knowledge of what’s a nonsense solution.”