The many worlds interpretation is not just splitting so that it can be deterministic.
Imagine in your example that someone wanted to know if it was random or splitting. They point a telescope through the fifth dimension, and see an identical universe, except with the opposite event. Someone suggests that maybe it’s mostly probabilistic, but there’s a split at the end. The person then manages to look closely enough to distinguish the next twenty or so universes. They’re told that it just branches further than they thought before.
This is how it is with quantum physics. We know there are multiple universes because we can test for them. Any quantum theory requires a configuration space for all the relevant particles. The only question is whether there is one giant configuration space for the whole universe, or many little configuration spaces, which otherwise already follow all the laws of the giant one, that randomly split and combine.
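To illustrate the "one joint configuration space" point, here is a minimal numpy sketch (a toy example of my own, not from the discussion): an entangled two-particle state lives in the joint space but cannot be factored into two one-particle states.

```python
import numpy as np

# Basis states for one two-level particle.
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# A product state lives in the joint (configuration) space but factors
# into (state of particle 1) x (state of particle 2):
product = np.kron(up, down)

# A Bell-type entangled state also lives in the joint space,
# but cannot be factored that way:
bell = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)

# Factorability test: reshape the joint state into a 2x2 matrix and
# check its rank. Rank 1 means it factors; rank 2 means entangled.
print(np.linalg.matrix_rank(product.reshape(2, 2)))  # 1
print(np.linalg.matrix_rank(bell.reshape(2, 2)))     # 2
```

An entangled state is only describable in the joint space, which is why the configuration space can't simply be split into independent per-particle pieces.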
The many worlds interpretation is not just splitting so that it can be deterministic.
My motivation for suggesting that “MWI is what you get when you mistakenly try to conceive of a quantum theory as deterministic” is the following:
First, imagine we have a simple deterministic universe like the Life universe. Forget about quantum theory for now, and suppose that we’re going to build a simulation of a “Coin universe” within the Life universe. Suppose we decide to do it by simulating all branches at once, and ‘tagging them’ somehow to indicate their relative probabilities. Then the “tags” will be epiphenomenal, and ‘from the inside’ the beings will experience a universe where “Theory 1” is true. In other words, the probabilities we assign won’t affect the experiences of the simulated beings.
Now, I want to say that this branching simulation is “what you get when you mistakenly try to model the coin universe as a deterministic universe”.
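The "tagged branches" simulation can be sketched in a few lines of Python (a toy of my own devising; `simulate_all_branches` and its parameters are hypothetical names). The probability tags vary, but the set of histories the simulated beings can experience does not.

```python
# Toy "Coin universe" simulated branch-by-branch, each branch tagged
# with a probability. (All names here are hypothetical.)

def simulate_all_branches(n_flips, p_heads):
    """Return every possible history, tagged with its probability."""
    branches = [("", 1.0)]
    for _ in range(n_flips):
        branches = [(history + outcome,
                     p * (p_heads if outcome == "H" else 1 - p_heads))
                    for history, p in branches
                    for outcome in ("H", "T")]
    return branches

fair = simulate_all_branches(3, 0.5)
biased = simulate_all_branches(3, 0.9)

# The probability tags differ between the two runs...
print(dict(fair)["HHH"], dict(biased)["HHH"])
# ...but the branch structure -- the histories the simulated beings
# could experience -- is exactly the same:
print([h for h, _ in fair] == [h for h, _ in biased])  # True
```

Nothing inside any branch depends on its tag, which is the sense in which the tags are epiphenomenal.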
OK, now let’s replace the coin universe with a universe where quantum mechanics is the ‘theory of everything’. Now we could simulate it within the Life universe by deterministically modelling the entire wavefunction, and that might even be the only way of doing so, but it isn’t clear to me that this wouldn’t cause some or all of the information about probabilities to become “epiphenomenal” in the same way as before. As Steane says:
[W]hen a classical computer simulates the action of a quantum computer, it may need exponentially more time steps or physical components, but then it also yields exponentially more information about the final state [than one can observe].
Hanson’s ingenious concept of “Mangled Worlds” might be exactly what I need to reassure myself that a deterministic simulation of the entire wavefunction would ‘feel the same from the inside’ as a genuine quantum universe. Armok_Gob was right to mention it. But then I’m just an “interested layperson”, and no physicists or philosophers of physics besides Hanson himself ever seem to mention Mangled Worlds, so I’m not quite sure what to make of it.
They point a telescope through the fifth dimension, and see an identical universe, except with the opposite event.
That’s not really analogous. What makes distinct MWI branches distinct is that they have ‘decohered’ and can no longer interact in any detectable way. [Disclaimer: I know this isn’t an ‘absolute’ notion—that the ‘off-diagonal elements’ don’t vanish entirely.]
Now, quantum interference can be illustrated with an experimental apparatus like the one in figure one, where, to show that photons always exit the same way, you need to take into account all of the possible paths a photon could take. However, since the experiment can only end one way, it all takes place within one “branch”. (There is no decoherence.) The fact that the different photon paths interfere with each other means that they’re not in “different worlds”.
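That single-branch interference can be sketched numerically (my own minimal illustration, using the standard symmetric 50/50 beamsplitter convention for a Mach-Zehnder interferometer, not anything taken from the figure):

```python
import numpy as np

# Mach-Zehnder interferometer with equal path lengths, using the
# standard symmetric 50/50 beamsplitter matrix. The two path
# amplitudes interfere so that the photon always exits one port.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

photon_in = np.array([1, 0])   # photon enters port 0
after = BS @ BS @ photon_in    # pass through both beamsplitters
probs = np.abs(after) ** 2     # Born rule: probability per exit port

print(np.round(probs, 12))     # [0. 1.] -- always exits port 1
```

Both paths contribute amplitudes to each exit port; one port gets total destructive interference, the other total constructive interference, all within a single branch.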
(Many worlds doesn’t “explain” quantum interference, in spite of what Deutsch might have you believe. I don’t think that was ever its “purpose”, to be fair.)
Now, in principle you could have macroscopic superpositions—e.g. you could do a two-slit experiment with people rather than electrons. But it’s better to say that the concept of “other worlds” breaks down if you push it too far, than to say we can thereby detect “other worlds”.
Anyway, this is all rather confusing and confused—I haven’t really worked out in my own mind if/how the “branching vs probabilities” discussion relates to MWI.
That’s not really analogous. What makes distinct MWI branches distinct is that they have ‘decohered’ and can no longer interact in any detectable way.
If the MWI branches are “close” and haven’t completely decohered, it’s possible to detect them. If they’re far away, it’s not. Similarly, if the universes are close by in the fifth dimension, you might be able to make them out. If they’re far away, and you have to look through a hundred universes to make it out, it’s essentially impossible. The method of detecting them is different, but the principle is the same.
However, since the experiment can only end one way, it all takes place within one “branch”.
You can detect it, but it doesn’t happen? Isn’t that like saying that the universe doesn’t exist, but we experience things as if it did?
You need to know the potential energy of every point in configuration space in order to find out the probability of a given event. How can it matter if it isn’t involved?
(Many worlds doesn’t “explain” quantum interference, in spite of what Deutsch might have you believe. I don’t think that was ever its “purpose”, to be fair.)
I don’t understand. It explains it. So does any interpretation beyond pure particle. Its purpose is to explain away wavefunction collapse and the process of particles getting entangled. The laws regarding those in the Copenhagen interpretation are bizarre, and the results are indistinguishable from just assuming that everything is always entangled and wavefunctions never collapse, which is the MWI.
But it’s better to say that the concept of “other worlds” breaks down if you push it too far, than to say we can thereby detect “other worlds”.
We can detect other worlds if they’re close enough, but not if they’re too far away. This isn’t just limited to the MWI. The Copenhagen interpretation follows the same laws with entangled particles. We’ve never been able to detect wavefunction collapse, so decoherence must first reach the point where we can’t detect the interference.
This is no different than saying that the next twenty branches exist, and maybe a few hundred more, but after a billion, the concept of other branches breaks down.
It’s also like saying that Earth is made of atoms, and the rest of our solar system is made of atoms, but we aren’t remotely capable of discerning atoms in other solar systems, so the concept of “being made of atoms” breaks down.
This has largely turned into a semantic dispute about the “correct” meaning of the term “world” in the context of MWI.
You’re using it to mean “summand with respect to the position basis” whereas I’m using it to mean “summand with respect to a decomposition of the Hilbert space into subspaces large enough that elements of two distinct subspaces represent ‘distinct macrostates’”. (Where “macroscopic distinctness” is not and does not pretend to be precisely definable.)
Right after the photon in the Mach-Zehnder apparatus splits, you see two worlds corresponding to the two different positions of the photon, whereas I see only a single world because all macroscopic variables still have determinate values. (Or rather, their values are still as close to being determinate as they ever are.)
In my use of the term “worlds” it is correct to say that the notion of “other worlds” breaks down if you push it too far (ultimately this is because the boundary between the “micro” and “macro” domains cannot be rigorously defined). In your use of the term “worlds” it is trivially true that, at any given time, the state vector is uniquely expressible as a superposition of “worlds”.
I don’t want to say too much in defense of my usage, except that I think mine is the standard one. You might like to read this by the way. (Not to resolve our dispute, but because it’s awesome.)
However, since the experiment can only end one way, it all takes place within one “branch”.
You can detect it, but it doesn’t happen? Isn’t that like saying that the universe doesn’t exist, but we experience things as if it did?
Sorry, I can’t see how your questions relate to my statement.
I don’t understand. It explains it.
The reason I say it doesn’t explain it is that the notion of “constructive and destructive interference” between different possibilities is deeply bizarre. Simply declaring that all possibilities exist doesn’t explain why two possibilities can cancel each other out. But again, I suspect this is partly just a dispute over the semantics of “explain”.
ETA: I have to acknowledge a bait-and-switch on my part. Whereas in my previous comment I was seeking to characterise worlds directly in terms of decoherence, now I’m characterising them by way of a third concept, namely “macroscopic distinctness”, which “under normal circumstances (i.e. not doing a two-slit experiment with people)” guarantees decoherence.
Sorry, I can’t see how your questions relate to my statement.
It was a misunderstanding you cleared up by specifying what you meant by “world”.
The reason I say it doesn’t explain it is that the notion of “constructive and destructive interference” between different possibilities is deeply bizarre.
The interference isn’t between probabilities. They don’t contain sufficient information. It’s between the amplitudes. Going from amplitudes to probabilities is the weird part. It’s not explained by any interpretation.
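This distinction can be shown in two lines of arithmetic (my own illustration; the step from amplitudes to probabilities is the Born rule, |amplitude|²):

```python
import numpy as np

# Two paths with equal and opposite amplitudes. Probabilities alone
# (0.5 each) contain no phase information and can't cancel; the
# amplitudes can.
a1 = 1 / np.sqrt(2)       # amplitude for path 1
a2 = -1 / np.sqrt(2)      # amplitude for path 2, opposite phase

p_if_probs_added = abs(a1) ** 2 + abs(a2) ** 2  # ~1.0: no cancellation
p_if_amps_added = abs(a1 + a2) ** 2             # 0.0: total cancellation

print(p_if_probs_added, p_if_amps_added)
```

The cross term in |a1 + a2|² is exactly the interference; it has no counterpart when you add probabilities.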
Good thing I didn’t say that, then!
Above, I said of MWI “I don’t think that was ever its ‘purpose’.”
How do you tell the difference between those from inside a branch?
The difference between what?
If you want to know how to tell the difference between the MWI of quantum mechanics and a single branch theory, there’s no experiment I can give, because there’s no such thing as a single branch theory.
The Schroedinger equation gives the behavior of a single particle in a potential field. If you want to model two particles, you have to use the configuration space.
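A quick back-of-the-envelope sketch of what that configuration space costs computationally (my own illustration; the grid resolution is an arbitrary assumption):

```python
# One particle in 3D, discretised at `grid` points per axis, needs
# grid**3 numbers to store its wavefunction. N particles need
# grid**(3*N), because the wavefunction is a function on
# configuration space, not on ordinary 3D space.
grid = 100  # illustrative resolution per spatial axis

for n_particles in (1, 2, 3):
    points = grid ** (3 * n_particles)
    print(f"{n_particles} particle(s): {points:.1e} grid points")
```

The exponential growth in particle number is what makes the configuration space a genuinely different object from a field on ordinary space.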
Random and Splitting.
Though, I just realized that I should just google around and look for papers on the subject.
Collapse interpretation, IIRC Bohmian interpretation, unreal MWI, etc.
They still involve the whole MWI, just on a smaller scale.
The difference between randomly choosing once things reach a certain scale and splitting is that splitting isn’t total. There’s still some interference between any two universes no matter how far apart they are. It’s just that when there are macroscopic differences, the interference is astronomically small.
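A rough sketch of how that suppression scales (a toy model of my own, assuming the branches differ in the states of N environment particles, each contributing an overlap factor c; both numbers are illustrative):

```python
import math

# If two branches differ in the states of N environment particles,
# and each particle contributes an overlap factor c < 1, the total
# interference term is weighted by c**N: never exactly zero, but
# astronomically small once N is macroscopic.
c = 0.9  # illustrative single-particle overlap

for n in (100, 10_000, 1_000_000):
    exponent = n * math.log10(c)  # total overlap is 10**exponent
    print(f"N={n}: overlap ~ 10^{exponent:.0f}")
```

Mathematically the interference never reaches zero, but once the exponent is in the tens of thousands it is far below anything detectable, which is the sense in which macroscopically different branches behave as separate worlds.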