But when we humans look at the sensor, it only seems to say “LEFT” or “RIGHT”, never a mixture like “LIGFT”. This, of course, is because we ourselves are made of particles, and subject to the standard quantum laws that imply decoherence. Under standard quantum laws, the final state is (particle left, sensor measures LEFT, human sees “LEFT”) + (particle right, sensor measures RIGHT, human sees “RIGHT”).
If there are two nearly identical copies of me in the same place, why is there no further interaction between them, resulting in my seeing “LIGFT”? (Well, now that I think of it, I do see “LIGFT”, if only because EY wrote it.) Yes, I know, the magical password is “decoherence”. How helpful.
Shminux, I trust you do know the actual answer to this, based on your demonstrated knowledge of QM. The essay does a qualitative job of this, here:
There are no plausible Feynman paths that end up with both LEFT and RIGHT sending amplitude to the same joint configuration. There would have to be a Feynman path from LEFT, and a Feynman path from RIGHT, in which all the quadrillions of differentiated particles ended up in the same places. So the amplitude flows from LEFT and RIGHT don’t intersect, and don’t interfere.
In order for the joint observer-observed system to be coherent, the two cases need to be reconcilable.
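To see the mechanics concretely, here is a minimal numpy sketch (a toy model of my own, not drawn from the essay): the particle and the sensor are each a single qubit, and the cross term that would let LEFT and RIGHT interfere factorizes into single-system overlaps.

```python
import numpy as np

# Particle states and sensor "pointer" states, one qubit each.
left, right = np.eye(2)
sL, sR = np.eye(2)   # toy pointer states for "LEFT" and "RIGHT"

# The two branches of the joint (particle, sensor) state.
branch_L = np.kron(left, sL)
branch_R = np.kron(right, sR)

# The cross term that would let the branches interfere factorizes:
# <branch_L|branch_R> = <left|right> * <sL|sR>
print(np.dot(branch_L, branch_R))   # 0.0: nothing for interference to feed on

# Even if the particle states overlapped, the orthogonal sensor records
# alone would kill the cross term:
tilted = (left + right) / np.sqrt(2)
print(np.dot(np.kron(tilted, sL), branch_R))   # still 0.0
```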
How is this a magical password? He pulls out the guts of decoherence and shows them to the reader!
Well, part of the guts. He’s given a sufficient but not necessary criterion for decoherence.
If there are two nearly identical copies of me in the same place, why is there no further interaction between them
Your two copies differ in the states of many neurons; that’s billions of particles. They are not “nearly identical”.
It is tempting to think of “one different thought” or “one different perception” as very small changes. But on the particle level those are huge changes. A small change at the particle level is something you can’t notice, and therefore you can’t notice those copies of you interacting… and by the time the change becomes big enough to notice, your copies are already decoherent.
It seems like the relevant section is
By hypothesis, Sensor-LEFT is a different state from Sensor-RIGHT—otherwise it wouldn’t be a very sensitive Sensor. So the final state doesn’t factorize any further; it’s entangled.
But this entanglement is not likely to manifest in difficulties of calculation. Suppose the Sensor has a little LCD screen that’s flashing “LEFT” or “RIGHT”. This may seem like a relatively small difference to a human, but it involves avogadros of particles—photons, electrons, entire molecules—occupying different positions.
So, since the states Sensor-LEFT and Sensor-RIGHT are widely separated in the configuration space, the volumes (Sensor-LEFT Atom-LEFT) and (Sensor-RIGHT Atom-RIGHT) are even more widely separated.
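A back-of-envelope sketch of why “avogadros of particles” matters (my own toy numbers, assuming simple product states): the overlap between two many-particle product states is the product of the single-particle overlaps, so it collapses to zero long before N approaches Avogadro’s number.

```python
import numpy as np

# For product states |A> = |a1>...|aN> and |B> = |b1>...|bN>, the overlap
# factorizes: <A|B> = prod_i <ai|bi>.  Suppose every single particle has
# barely moved, so each factor is 0.999 (a made-up illustrative number).
per_particle = 0.999

for n in (1e2, 1e4, 1e6):
    log_overlap = n * np.log(per_particle)   # work in logs to avoid underflow
    print(f"N = {n:.0e}: |<A|B>| ~ exp({log_overlap:.1f})")

# N = 1e+02: |<A|B>| ~ exp(-0.1)     still essentially 1
# N = 1e+04: |<A|B>| ~ exp(-10.0)    about 5e-5
# N = 1e+06: |<A|B>| ~ exp(-1000.5)  gone; at N ~ 6e23 the separation is total
```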
The question still left in my mind is what is meant by “widely separated”, and why states that are widely separated have volumes that are widely separated.
For example, take a chaotic system evolving from an initial state. (Perhaps an energetic particle in a varying potential field.) After evolving, the probability that was concentrated at that initial state flows out to encompass a rather large region of configuration space. Presumably in this case the end states can be widely separated, but the probability volumes are not.
What does ‘widely separated’ mean? I suspect that this can be defined without recourse to a full detailed treatment of decoherence. Let’s give that a try. (I’m going to feel free to edit this until someone responds, since I’m kind of thinking out loud).
The obvious but wrong answer is that, given two initial components |a> and |b>, the measurement process produces consequences such that U|a> is orthogonal to U|b>. Of course, that’s trivially true, since <Ua|Ub> = <a|b> = 0. Even if they were overlapping everywhere, the unitary process of time evolution would make their overlap integral keep canceling out. And meanwhile they would be interfering with each other—not independent at all.
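A quick numerical check of that triviality, using a random unitary (a toy sketch, not a treatment of decoherence):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random unitary from the QR decomposition of a complex Gaussian matrix.
m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(m)

a = np.array([1, 0, 0, 0], dtype=complex)
b = np.array([0, 1, 0, 0], dtype=complex)

print(np.vdot(a, b))          # 0j: orthogonal to begin with
print(np.vdot(U @ a, U @ b))  # ~0j: <Ua|Ub> = <a|b>, whatever unitary U is
```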
What we need is for the random phase approximation to become applicable. For a local system, this can come about by exchanging a particle with the outside. Whether this approach applies to a single universal wavefunction is not clear. We would need to be able to speak of information loss and dissipation in unitary language.
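For concreteness, a toy sketch of the random phase approximation at work (my own illustration): averaging a qubit’s density matrix over a uniformly random relative phase leaves the populations intact and washes out the coherences.

```python
import numpy as np

rng = np.random.default_rng(1)

# A qubit in (|0> + e^{i theta}|1>)/sqrt(2), with theta scrambled by the
# outside.  Averaging the density matrix over the random phase leaves the
# populations alone and washes out the off-diagonal coherences.
rhos = []
for _ in range(100_000):
    theta = rng.uniform(0.0, 2.0 * np.pi)
    psi = np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)
    rhos.append(np.outer(psi, psi.conj()))

print(np.round(np.mean(rhos, axis=0), 2))
# [[0.5+0.j  0. +0.j]
#  [0. +0.j  0.5+0.j]]   coherences gone; the qubit now looks classical
```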
I had another flawed but more promising notion, that one could get somewhere by considering a second split after a first. You have two potentially decoherent vectors |a> and |b>, with <a|b> = 0; then you split |b> = |c> + |d> such that <c|d> = 0. The idea was that |a> and |b> are ‘widely separated’ if any choice of |c> and |d> will have <a|c> = <a|d> = 0… except that you can always choose some crazy superposed |c> that explicitly overlaps |a> and |b>.
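The loophole is easy to exhibit in a three-dimensional toy example (my own construction):

```python
import numpy as np

# |a> orthogonal to |b>; split |b> = |c> + |d> with <c|d> = 0, but pick
# |c> "crazily" so that it leans on |a> as well.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

c = np.array([0.5, 0.5, 0.0])   # deliberately overlaps both |a> and |b>
d = b - c                       # so that |c> + |d> = |b>

print(np.dot(c, d))   # 0.0  the split satisfies <c|d> = 0
print(np.dot(a, c))   # 0.5  yet <a|c> != 0, so the proposed criterion fails
```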
Based on this, I thought instead about operators that take macro-scale measurements, like ‘is that screen reading X’. Then you can require that |c> and |d> are each in the same kernel of each of these operators as |b> is. That might be sufficient even without splitting |b>: as long as you can construct a macro-scale measurement that indicates |b> rather than |a>, the two will be distinguishable by it, so they won’t interfere. But that in itself doesn’t prove that you can’t smash the computer and get them to interfere again.
Of course, all that puts it backwards: it focuses on how you could possibly establish a perfectly ordinary decoherent state, rather than on how you maintain an utterly abnormal coherent state (the latter being the approach the sequence suggests).
You need to be able to split off a subspace of the Hilbert space such that the ‘outside’ is completely independent of the ‘inside’. Nearly completely causally independent, at least over some time domain. For example, in an interferometer, all the rest of the universe depends on is that the inside of the interferometer is only doing interferometry, not, say, exploding. If there were such a dependence (and it was a true dependence, such that the various outcomes actually produced different effects), then the joint configurations rule would kick in, and the subspace could not interfere, because of the different effects on the outside.
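To illustrate the consequence in miniature (a toy sketch, not the causal language asked for below): entangle two interferometer arms with an ‘outside’ state and trace the outside out; the fringe visibility is exactly the overlap between the outside’s two conditional states.

```python
import numpy as np

arms = np.eye(2)   # |upper>, |lower> paths through the interferometer

def visibility(env_upper, env_lower):
    """Fringe visibility of the arms after entangling each arm with an
    'outside' state and tracing the outside out."""
    psi = np.kron(arms[0], env_upper) + np.kron(arms[1], env_lower)
    psi = psi / np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)   # (arm, env, arm', env')
    rho_arm = np.trace(rho, axis1=1, axis2=3)             # partial trace over env
    return 2 * abs(rho_arm[0, 1])

e0, e1 = np.eye(2)
print(visibility(e0, e0))                       # 1.0  outside unaffected: full fringes
print(visibility(e0, e1))                       # 0.0  outside records the arm: none
print(visibility(e0, (e0 + e1) / np.sqrt(2)))   # ~0.707  a partial record
```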
The problem here is, I do not know of any mathematical language for expressing causal dependence in quantum mechanics. If there is one, this is a very brief statement in it.
There is a (very brief) account in that post of what decoherence is and why it leads to non-interaction. There is a much more extensive discussion of the point in the previous (linked) post on decoherence.
You do realise that the one-paragraph summary here is only a one-paragraph summary, and that there’s a lot more in the original post, yes?
There’s not that much more.
There are some claims about how the system evolves, and some more handwaving with Feynman path integrals.
Just because you call it handwaving doesn’t make it so. There really is no plausible way for a measurement process that ends with that LCD screen being read to be undone, leaving a state universally identical to one in which the opposite measurement had been made.
I don’t disagree with your second sentence. Regarding the first, I don’t think there’s really any argument about whether or not it’s handwaving. The question is whether or not it’s justified handwaving in the pursuit of a pseudo-rigorous understanding of quantum mechanics.
I’m comfortable with him saying that time evolution is linear, because there are intuitive reasons for it to be so, and he presents those reasons elsewhere.
I’m less comfortable with the use of Feynman paths in this article. Take the following quote:
There are no plausible Feynman paths that end up with both LEFT and RIGHT sending amplitude to the same joint configuration. There would have to be a Feynman path from LEFT, and a Feynman path from RIGHT, in which all the quadrillions of differentiated particles ended up in the same places. So the amplitude flows from LEFT and RIGHT don’t intersect, and don’t interfere.
It’s really hard to make sense of this given the way Feynman paths are treated earlier. I can make sense of it if I rely on what traditional training I’ve had in quantum mechanics, but not everyone has that background.
‘Handwaving’ describes vagueness. Yet, just how much vagueness qualifies as ‘handwaving’ is not well-defined!
This builds on the result of ‘joint configurations’, which is that for interference to occur, everything needs to line up. EVERYTHING. Otherwise, it’s offset in some dimension or other, and not really in the same ‘place’ at all. With that in place, this is a short step to take.
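One way to see “everything needs to line up” concretely (a discrete toy model of my own, not how real amplitudes are computed): key each amplitude contribution by the full joint configuration, and note that cancellation only happens when two contributions land on exactly the same key.

```python
import numpy as np
from collections import defaultdict

# Amplitude lands on *joint* configurations: tuples recording where every
# particle is.  Contributions interfere only when the full tuples match.
amp = defaultdict(complex)

# Two paths to the same joint configuration: they can cancel.
amp[("photon at D", "sensor atom unflipped")] += 1 / np.sqrt(2)
amp[("photon at D", "sensor atom unflipped")] += -1 / np.sqrt(2)

# Two paths that differ by one sensor atom: offset in one dimension,
# so they land on different configurations and cannot cancel.
amp[("photon at E", "sensor atom unflipped")] += 1 / np.sqrt(2)
amp[("photon at E", "sensor atom flipped")] += -1 / np.sqrt(2)

for config, a in amp.items():
    print(config, round(abs(a) ** 2, 3))
# ('photon at D', 'sensor atom unflipped') 0.0   destructive interference
# ('photon at E', 'sensor atom unflipped') 0.5   no cancellation
# ('photon at E', 'sensor atom flipped') 0.5
```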
‘Handwaving’ describes vagueness. Yet, just how much vagueness qualifies as ‘handwaving’ is not well-defined!
I don’t disagree? I’m making essentially an aesthetic point.
I thought I qualified how much vagueness was acceptable—there is vagueness that is pedagogically useful, and there is vagueness that is not pedagogically useful, and my accusation of handwaving is isomorphic to saying that the vagueness with Feynman paths here is not pedagogically useful.
This builds on the result of ‘joint configurations’, which is that for interference to occur, everything needs to line up. EVERYTHING. Otherwise, it’s offset in some dimension or other, and not really in the same ‘place’ at all. With that in place, this is a short step to take.
I can’t follow this explanation at all. Too many ambiguous pronouns. But this is okay; the goal isn’t to explain it to me—I have all the training in quantum mechanics that I care to have.
“Everything needs to line up” is the key point, and once you understand it, it’s really quite simple. It just means that there can be more than one way to get to the same configuration state. Think about history seeming to branch out in a tree-like way, as most people tend to imagine. But if two branching paths are not far apart (e.g. differing by just a single photon), then it is easy for them to come back together. History changes from a tree to a graph. Being a graph means that a point can have two history paths (actually, every point has an infinite amount of ancestry, but most of it cancels out). When you have more than one history path, both constructive and destructive interference can take place, and destructive interference means that the probability of some states goes down, i.e. some final states no longer happen (you no longer see a photon appearing in some places).
Is this making it clearer or have I made it worse? ;-)
See the comments on How Many Worlds? for why introducing the graph metaphor is confusing and negatively helpful to beginners.
Well, true, a graph implies a discreteness that does not correspond closely to a continuous configuration space. I actually think of it as the probability of finding yourself in some volume of configuration space being influenced by “significant” amplitudes flowing in from more than one other volume of configuration space, although even that is not a great explanation, as it suggests the ticking of a discrete time parameter. A continuously propagating wavefront is probably a much better analogy. Or we can just go into calculus mode and consider boxes of configuration space which we then shrink down arbitrarily while taking a limit value. But sometimes it’s just easier to think “branches” ;-)
I’m tapping out.
Nobody seems to think EY’s exposition is an issue, and you’re the second person who’s tried—and I can’t understand the motivation for this—to explain the underlying QM to me in vague metaphors that neither reflect the underlying theory nor present a pedagogical simplification.
But it does reflect the underlying theory (though it takes special cases and simplifies), and it does present a pedagogical simplification (because it’s a hell of a lot easier than solving huge quantum systems). Heck, it’s not even a metaphor. A DAG is blank enough—has few enough intrinsic properties—to be an incomplete model instead of a metaphor.
Does anything other than a fully quantum description of a system, using only an interacting-particle Hamiltonian with no externally applied fields, count as a non-vague non-metaphor?