Decoherence as Projection
Previously in series: The Born Probabilities
In “The So-Called Heisenberg Uncertainty Principle” we got a look at how decoherence can affect the apparent surface properties of objects: By measuring whether a particle is to the left or right of a dividing line, you can decohere the part of the amplitude distribution on the left with the part on the right. Separating the amplitude distribution into two parts affects its future evolution (within each component) because the two components can no longer interfere with each other.
Yet there are more subtle ways to take apart amplitude distributions than by splitting the position basis down the middle. And by exploring this, we rise further up the rabbit hole.
(Remember, the classical world is Wonderland, the quantum world is reality. So when you get deeper into quantum physics, you are going up the rabbit hole, not down the rabbit hole.)
Light has a certain quantum property called “polarization”. Of course, all known physical properties are “quantum properties”, but in this case I mean that polarization neatly exhibits fundamental quantum characteristics. I mention this, because polarization is often considered part of “classical” optics. Why? Because the quantum nature of polarization is so simple that it was accidentally worked out as part of classical mechanics, back when light was thought to be a wave.
(Nobody tell the marketers, though, or we’ll be wearing “quantum sunglasses”.)
I don’t usually begin by discussing the astronomically high-level phenomena of macroscopic physics, but in this case, I think it will be helpful to begin with a human-world example...
I hand you two little sheets of semi-transparent material, looking perhaps like dark plastic, with small arrows drawn in marker along the sides. When you hold up one of the sheets in front of you, the scene through it is darker—it blocks some of the light.
Now you hold up the second sheet in front of the first sheet...
When the two arrows are aligned, pointing in the same direction, the scene is no darker than before—that is, the two sheets in series block the same amount of light as the first sheet alone.
But as you rotate the second sheet, so that the two arrows point in increasingly different directions, the world seen through both sheets grows darker. When the arrows are at 45° angles, the world is half as bright as when you were only holding up one sheet.
When the two arrows are perpendicular (90°) the world is completely black.
Then, as you continue rotating the second sheet, the world gets lighter again. When the two arrows point in opposite directions, again the lightness is the same as for only one sheet.
Clearly, the sheets are selectively blocking light. Let’s call the sheets “polarized filters”.
Now, you might reason something like this: “Light is built out of two components, an up-down component and a left-right component. When you hold up a single filter, with the arrow pointing up, it blocks out the left-right component of light, and lets only the up-down component through. When you hold up another filter in front of the first one, and the second filter has the arrow pointing to the left (or the right), it only allows the left-right component of light, and we already blocked that out, so the world is completely dark. And at intermediate angles, it, um, blocks some of the light that wasn’t blocked already.”
So I ask, “Suppose you’ve already put the second filter at a 45° angle to the first filter. Now you put up the third filter at a 45° angle to the second filter. What do you expect to see?”
“That’s ambiguous,” you say. “Do you mean the third filter to end up at a 0° angle to the first filter, or a 90° angle to the first filter?”
“Good heavens,” I say, “I’m surprised I forgot to specify that! Tell me what you expect either way.”
“If the third filter is at a 0° angle to the first filter,” you say, “It won’t block out anything the first filter hasn’t blocked already. So we’ll be left with the half-light world, from the second filter being at a 45° angle to the first filter. And if the third filter is at a 90° angle to the first filter, it will block out everything that the first filter didn’t block, and the world will be completely dark.”
I hand you a third filter. “Go ahead,” I say, “Try it.”
First you set the first filter at 0° and the second filter at 45°, as your reference point. Half the light gets through.
Then you set the first filter at 0°, the second filter at 45°, and the third filter at 0°. Now one quarter of the light gets through.
“Huh?” you say.
“Keep going,” I reply.
With the first filter at 0°, the second filter at 45°, and the third filter at 90°, one quarter of the light goes through. Again.
“Umm...” you say. You quickly take out the second filter, and find that the world goes completely dark. Then you put in the second filter, again at 45°, and the world resumes one-quarter illumination.
Further investigation quickly verifies that all three filters seem to have the same basic properties—it doesn’t matter what order you put them in.
“All right,” you say, “that just seems weird.” You pause. “So it’s probably something quantum.”
Indeed it is.
Though light may seem “dim” or “bright” at the macroscopic level, you can’t split it up indefinitely; you can always send a single photon into the series of filters, and ask what happens to that single photon.
As you might suspect, if you send a single photon through the succession of three filters, you will find that—assuming the photon passes the first filter (at 0°)—the photon is observed to pass the second filter (at 45°) with 50% probability, and, if the photon does pass the second filter, then it seems to pass the third filter (at 90°) with 50% probability.
The appearance of “probability” in deterministic amplitude evolutions, as we now know, is due to decoherence. Each time a photon was blocked, some other you saw it go through. Each time a photon went through, some other you saw it blocked.
But what exactly is getting decohered? And why does an intervening second filter at 45° let some photons pass that would otherwise be blocked by the 0° filter plus the 90° filter?
First: We can represent the polarization of light as a complex amplitude for up-down plus a complex amplitude for left-right. So polarizations might be written as (1 ; 0) or (0 ; -i) or (√.5 ; √.5), with the units (up-down ; left-right). It is more customary to write these as column vectors, but row vectors are easier to type.
(Note that I say that this is a way to “represent” the polarization of light. There’s nothing magical about picking up-down vs. left-right, instead of upright-downleft vs. upleft-downright. The vectors above are written in an arbitrary but convenient basis. This will become clearer.)
Let’s say that the first filter has its little arrow pointing right. This doesn’t mean that the filter blocks any photon whose polarization is not exactly (0 ; 1) or a multiple thereof. But it nonetheless happens that all the photons which we see leave the first filter, will have a polarization of (0 ; 1) or some complex multiple thereof (an overall phase, which is irrelevant). Let’s just take this for granted, for the moment. Past the first filter at 0°, we’re looking at a stream of photons purely polarized in the left-right direction.
Now the photons hit a second filter. Let’s say the second filter is at a 30° angle to the first—so the arrow written on the second filter is pointing 30° above the horizontal.
Then each photon has a 25% probability of being blocked at the second filter, and a 75% probability of going through.
How about if the second filter points to 20° above the horizontal? 12% probability of blockage, 88% probability of going through.
45°, 50⁄50.
The general rule is that the probability of being blocked is the squared sine of the angle, and the probability of going through is the squared cosine of the angle.
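This rule (classically, Malus’s law) is easy to check numerically. Here’s a minimal sketch in Python; the function name is mine, not anything from the post:

```python
import math

def transmission_probability(angle_degrees):
    """Probability that a photon passes a filter whose axis is at
    angle_degrees to the photon's linear polarization: cos^2(theta).
    The blockage probability is the complementary sin^2(theta)."""
    theta = math.radians(angle_degrees)
    return math.cos(theta) ** 2

for angle in (0, 20, 30, 45, 90):
    p = transmission_probability(angle)
    print(f"{angle:>2} degrees: pass {p:.2f}, blocked {1 - p:.2f}")
```

Run it and you recover the numbers above: roughly 88% through at 20°, 75% at 30°, 50% at 45°, and nothing at 90°.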
Why?
First, remember two rules we’ve picked up about quantum mechanics: The evolution of quantum systems is linear and unitary. When an amplitude distribution breaks into parts that then evolve separately, the components must (1) add to the original distribution and (2) have squared moduli adding to the squared modulus of the original distribution.
So now let’s consider the photons leaving the first filter, with “polarizations”, quantum states, of (0 ; 1).
To understand what happens when the second filter is set at a 45° angle, we observe (as a purely abstract statement about 2-vectors) that:
(0 ; 1) = (.5 ; .5) + (-.5 ; .5)
Okay, so the two vectors on the right-hand-side sum to (0 ; 1) on the left-hand-side.
But what about the squared modulus? Just because two vectors sum to a third, doesn’t mean that the squares of the first two vectors’ lengths sum to the square of the third vector’s length.
The squared length of the vector (.5 ; .5) is (.5)² + (.5)² = .25 + .25 = 0.5. And likewise the squared length of the vector (-.5 ; .5) is (-.5)² + (.5)² = 0.5. The sum of the squares is 0.5 + 0.5 = 1. Which matches the squared length of the vector (0 ; 1).
So when you decompose (0 ; 1) into (.5 ; .5) + (-.5 ; .5), this obeys both linearity and unitarity: The two parts sum to the original, and the squared modulus of the parts sums to the squared modulus of the original.
When you interpose the second filter at an angle of 45° from the first, it decoheres the incoming amplitude of (0 ; 1) into an amplitude of (.5 ; .5) for being transmitted and an amplitude of (-.5 ; .5) for being blocked. Taking the squared modulus of the amplitudes gives us the observed Born probabilities, i.e. fifty-fifty.
What if you interposed the second filter at an angle of 30° from the first? Then that would decohere the incoming amplitude vector of (0 ; 1) into the vectors (.433 ; .75) and (-.433 ; .25). The squared modulus of the first vector is .75, and the squared modulus of the second vector is .25, again summing to one.
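The same decomposition can be done mechanically for any angle. Here is a small sketch; the axis conventions (transmission axis at the filter’s angle above the horizontal, written in (up-down ; left-right) coordinates) are my own assumption, chosen to reproduce the numbers above:

```python
import math

def project(vec, angle_deg):
    """Decompose vec = (up_down, left_right) into a transmitted and an
    absorbed component for a filter whose arrow points angle_deg above
    the horizontal. Returns (transmitted_vector, absorbed_vector)."""
    th = math.radians(angle_deg)
    t_axis = (math.sin(th), math.cos(th))    # transmission axis (unit vector)
    a_axis = (math.cos(th), -math.sin(th))   # perpendicular absorption axis
    t_amp = vec[0] * t_axis[0] + vec[1] * t_axis[1]
    a_amp = vec[0] * a_axis[0] + vec[1] * a_axis[1]
    transmitted = (t_amp * t_axis[0], t_amp * t_axis[1])
    absorbed = (a_amp * a_axis[0], a_amp * a_axis[1])
    return transmitted, absorbed

t, a = project((0.0, 1.0), 30)
print(t, a)  # ~(.433, .75) and ~(-.433, .25)

# linearity: the parts sum to the original vector;
# unitarity: the squared moduli sum to the original squared modulus
sq = lambda v: v[0] ** 2 + v[1] ** 2
print(sq(t) + sq(a))  # ~1.0
```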
A polarized filter projects the incoming amplitude vector into the two sides of a right triangle that sums to the original vector, and decoheres the two components. And so, under Born’s rule, the transmission and absorption probabilities are given by the Pythagorean Theorem.
(!)
A filter set at 0° followed by a filter set at 90° will block all light—any photon that emerges from the first filter will have an amplitude vector of (0 ; 1), and the component in the direction of (1 ; 0) will be 0. But suppose that instead you put an intermediate filter at 45°. This will decohere the vector of (0 ; 1) into a transmission vector of (.5 ; .5) and an absorption amplitude of (-.5 ; .5).
A photon that is transmitted through the 45° filter will have a polarization amplitude vector of (.5 ; .5). (The (-.5 ; .5) component is decohered into another world where you see the photon absorbed.)
This photon then hits the 90° filter, whose transmission amplitude is the component in the direction of (1 ; 0), and whose absorption amplitude is the component in the direction of (0 ; 1). (.5 ; .5) has a component of (.5 ; 0) in the direction of (1 ; 0) and a component of (0 ; .5) in the direction of (0 ; 1). So it has an amplitude of (.5 ; 0) to make it through both filters, which translates to a Born probability of .25.
Likewise if the second filter is at −45°. Then it decoheres the incoming (0 ; 1) into a transmission amplitude of (-.5 ; .5) and an absorption amplitude of (.5 ; .5). When (-.5 ; .5) hits the third filter at 90°, it has a component of (-.5 ; 0) in the direction of (1 ; 0), and because these are complex numbers we’re talking about, (-.5 ; 0) has a squared modulus of 0.25, that is, 25% probability to go through both filters.
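Once you take the decoherence at each filter for granted, the chained probabilities reduce to a product of squared cosines of successive angle differences. A quick sketch (the helper function is my own, just illustrating the arithmetic):

```python
import math

def survival_probability(filter_angles):
    """Probability that a photon which passed the first filter in the
    list also passes all the rest. Each filter decoheres the amplitude,
    so the cos^2 probabilities of successive angle differences multiply."""
    p = 1.0
    for prev, nxt in zip(filter_angles, filter_angles[1:]):
        p *= math.cos(math.radians(nxt - prev)) ** 2
    return p

print(survival_probability([0, 90]))       # ~0: crossed filters block everything
print(survival_probability([0, 45, 90]))   # ~0.25: the middle filter lets light through
print(survival_probability([0, -45, 90]))  # ~0.25: likewise at -45 degrees
```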
It may seem surprising that putting in an extra filter causes more photons to go through, even when you send them one at a time; but that’s quantum physics for you.
“But wait,” you say, “Who needs the second filter? Why not just use math? The initial amplitude of (0 ; 1) breaks into an amplitude of (-.5 ; .5) + (.5 ; .5) whether or not you have the second filter there. By linearity, the evolution of the parts should equal the evolution of the whole.”
Yes, indeed! So, with no second filter—just the 0° filter and the 90° filter—here’s how we’d do that analysis:
First, the 0° filter decoheres off all amplitude of any incoming photons except the component in the direction of (0 ; 1). Now we look at the photon—which has some amplitude (0 ; x) that we’ve implicitly been renormalizing to (0 ; 1)—and, in a purely mathematical sense, break it up into (.5x ; .5x) and (-.5x ; .5x) whose squared moduli will sum to x².
Now first we consider the (.5x ; .5x) component; it strikes the 90° filter which transmits the component (.5x ; 0) and absorbs the (0 ; .5x) component.
Next we consider the (-.5x ; .5x) component. It also strikes the 90° filter, which transmits the component (-.5x ; 0) and absorbs the component (0 ; .5x).
Since no other particles are entangled, we have some identical configurations here: Namely, the two configurations where the photon is transmitted, and the two configurations where the photon is absorbed.
Summing the amplitude vectors of (.5x ; 0) and (-.5x ; 0) for transmission, we get a total amplitude vector of (0 ; 0).
Summing the amplitude vectors of (0 ; .5x) and (0 ; .5x) for absorption, we get an absorption amplitude of (0 ; x).
So all photons that make it through the first filter are blocked.
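The cancellation is plain arithmetic on the two components. A minimal check, using the (up-down ; left-right) coordinates from above and taking x = 1:

```python
# The purely mathematical decomposition of (0 ; 1), with no middle filter:
comp1 = (0.5, 0.5)    # (.5 ; .5)
comp2 = (-0.5, 0.5)   # (-.5 ; .5)

# The 90-degree filter transmits the up-down component of each piece
# and absorbs the left-right component. Sum the identical configurations:
transmit = (comp1[0] + comp2[0], 0.0)   # (.5 ; 0) + (-.5 ; 0)
absorb = (0.0, comp1[1] + comp2[1])     # (0 ; .5) + (0 ; .5)

print(transmit)  # (0.0, 0.0): total transmission amplitude is zero
print(absorb)    # (0.0, 1.0): all the amplitude ends up absorbed
```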
Remember Experiment 2 from way back when? Opening up a new path to a detector can cause fewer photons to be detected, because the new path has an amplitude of opposite sign to some existing path, and they cancel out.
In an exactly analogous manner, having a filter that sometimes absorbs photons, can cause more (individual) photons to get through a series of filters. Think of it as decohering off a component of the amplitude that would otherwise destructively interfere with another component.
A word about choice of basis:
You could just as easily create a new basis in which (1 ; 0) = (.707 ; .707) and (0 ; 1) = (.707 ; -.707). This is the upright-downleft and upleft-downright basis of which I spoke before. .707 = √.5, so the basis vectors individually have length 1; and the dot product of the two vectors is 0, so they are orthogonal. That is, they are “orthonormal”.
The new basis is just as valid as a compass marked NW, NE, SE, SW instead of N, E, S, W. There isn’t an absolute basis of the photon’s polarization amplitude vector, any more than there’s an absolute three-coordinate system that describes your location in space. Ideally, you should see the photon’s polarization as a purely abstract 2-vector in complex space.
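You can check in a few lines that the rotated basis is orthonormal, and recover the (.707 ; -.707) representation of (0 ; 1). A sketch (pure illustration; none of these variable names come from the post):

```python
import math

s = math.sqrt(0.5)  # .707...
# The rotated basis vectors, written in the old (up-down ; left-right) basis:
e1 = (s, s)    # upright-downleft
e2 = (s, -s)   # upleft-downright

dot = lambda a, b: a[0] * b[0] + a[1] * b[1]

print(dot(e1, e1), dot(e2, e2))  # each has unit length (squared length ~1)
print(dot(e1, e2))               # inner product ~0: the basis is orthogonal

# The old basis vector (0 ; 1), re-expressed in the new basis:
print(dot((0, 1), e1), dot((0, 1), e2))  # ~.707 and ~-.707
```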
(One of my great “Ahas!” while reading the Feynman Lectures was the realization that, rather than a 3-vector being made out of an ordered list of 3 scalars, a 3-vector was just a pure mathematical object in a vector algebra. If you wanted to take the 3-vector apart for some reason, you could generate an arbitrary orthonormal basis and get 3 scalars that way. In other words, you didn’t build the vector space by composing scalars—you built the decomposition from within the vector space. I don’t know if that makes any sense to my readers out there, but it was the great turning point in my relationship with linear algebra.)
Oh, yes, and what happens if you have a complex polarization in the up-down/left-right basis, like (.707i ; .707)? Then that corresponds to “circular polarization” or “elliptical polarization”. All the polarizations I’ve been talking about are “linear polarizations”, where the amplitudes in the up-down/left-right basis happen to be real numbers.
When things decohere, they decohere into pieces that add up to the original (linearity) and whose squared moduli add up to the original squared modulus (unitarity). If the squared moduli of the pieces add up to the original squared modulus, this implies the pieces are orthogonal—that the components have inner products of zero with each other. That is why the title of this blog post is “Decoherence as Projection”.
A word about how not to see this whole business of polarization:
Some ancient textbooks will say that when you send a photon through a 0° filter, and it goes through, you’ve learned that the photon is polarized left-right rather than up-down. Now you measure it with another filter at a 45° angle, and it goes through, so you’ve learned that the photon is polarized upright-downleft rather than upleft-downright. And (says the textbook) this second measurement “destroys” the first, so that if you want to know the up-down / left-right polarization, you’ll have to measure it all over again.
Because you can’t know both at the same time.
And some of your more strident ancient textbooks will say something along the lines of: the up-down / left-right polarization no longer exists after the photon goes through the 45° filter. It’s not just unknown, it doesn’t exist, and—
(you might think that wasn’t too far from the truth)
—it is meaningless to even talk about it.
Okay. That’s going a bit too far.
There are ways to use a polarizer to split a beam into two components, rather than absorbing a component and transmitting a component.
Suppose you first send the photons through a 0° filter. Then you send them through a 45° splitter. Then you recombine the beams. Then you send the photons through a 0° filter again. All the photons that made it past the first filter, will make it past the third filter as well. Because, of course, you’ve put the components back together again, and (.5 ; .5) + (-.5 ; .5) = (0 ; 1).
This doesn’t seem to square with the idea that measuring the 45° polarization automatically destroys the up-down/left-right polarization, that it isn’t even meaningful to talk about it.
Of course the one will say, “Ah, but now you no longer know which path the photon took past the splitter. When you recombined the beams, you unmeasured the photon’s 45° polarization, and the original 0° polarization popped back into existence again, and it was always meaningful to talk about it.”
O RLY?
Anyway, that’s all talk about classical surface appearances, and you’ve seen the underlying quantum reality. A photon with polarization of (-.707 ; .707) has a component of (-.707 ; 0) in the up-down direction and a component of (0 ; .707) in the left-right direction. If you happened to feed it into an apparatus that decohered these two components—like a polarizing filter—then you would be able to predict the decoherent evolution as a deterministic fact about the amplitude distribution, and the Born probabilities would (deterministically if mysteriously) come out to 50⁄50.
Now someone comes along and says that the result of this measurement you may or may not perform, doesn’t exist or, better yet, isn’t meaningful.
It’s hard to see what this startling statement could mean, let alone how it could improve your experimental predictions. How would you falsify it?
Part of The Quantum Physics Sequence
Next post: “Entangled Photons”
Previous post: “The Born Probabilities”
Not a comment on the theory, but if you want to play with the experiments yourself, find some old LCD electronics (calculators, etc) that can be sacrificed on the altar of curiosity. They typically have a strip of polarizing material above the display (rather, they did when I was growing up).
It’s a bit more elegant than trying to get some sunglasses oriented at 90° to each other.
Why doesn’t someone make some circular sunglasses that have two polarized disks that you can rotate with respect to each other?
I was surprised to find out that someone did, but it probably doesn’t work very well since nobody seems to retail them. Possible problems could include irregularities in the polarization, like what makes the rainbows in car windows when you’re wearing a single polarized lens.
The idea occurred to me several years ago, but I passed up on it, since it seemed like it would be difficult to make it with lenses that weren’t circular, and those aren’t really in style (or are even that effective, given the shape of the human face).
Well, it isn’t quite that, but I made an analogue of it prompted by that exact same thought. Movie 3-d glasses are polarized (the two slightly different images on the screen have orthogonal polarizations, so each image only goes through one lens), so if you can sneak two or more pairs of 3-d glasses out of a movie theater, you can pop the lenses out of one pair, and tape them on the other pair (rotated so that almost all light is canceled out.) The resulting cross-polarized improvised glasses are so dark, that if you made them just right, it is possible to stare straight at the sun and see sunspots. However, this makes them quite useless for most other purposes.
I asked this question in the Born Probabilities post but it didn’t get answered, so I’ll try again, because I think it is important; and it concerns decoherence, so it fits here:
“A major problem with Robin’s theory is that it seems to predict things like, ‘We should find ourselves in a universe in which lots of decoherence events have already taken place,’ which tendency does not seem especially apparent. Actually the theory suggests we should find ourselves in a state with near the least feasible number of past decoherence events.”

I don’t understand this: doesn’t decoherence occur all the time, in every quantum interaction between all amplitudes? So, for every amplitude separate enough to be a “particle” (bad talk, I know ;-) in the universe (=factor), every Planck time it will decohere with other factors?
So: all possible factorizations (=decoherence) occur, and not only when one prepares a quantum experiment.
Or did I misunderstand something big time here?
Cheers, Peter
@Boris: Already patented.
There is, of course, a fairly simple alternative solution, dealing with “real” particles; the photons coming out of the filters are not the photons that went in. Photons don’t travel through the sheet; the energy is absorbed, and the properties of individual components of energy determine what happens next. The properties of some chunks of energy cause similarly-propertied energy to be re-emitted on the other side. It’s not that the photons have mysteriously lost the information about their “spin” in the middle sheet—it’s that we’re dealing with new photons with new property sets, which are being re-emitted with the emission properties of the second sheet, rather than the first.
With this interpretation, the phenomenon makes perfect sense, and the old textbooks are right—after a fashion—that the second measurement destroyed the information that the first measurement generated.
Adirian, that doesn’t explain why recombining the split beams reproduces the old, “destroyed” orientation. In any case, the fundamental physics are already known.
What happens in the split and recombined beams case if only one photon is emitted at a time? Does it still have a 50% chance of transmission through all three?
What happens if you make one of the split paths significantly longer, by about a unit of light-time longer than the pulse of light?
What happens if you send only one photon through at a time, with different path times?
What happens if you make the split path 60 light-milliseconds or so even longer, and put a shutter near the recombiner that can selectively block or transmit the split path? What if the shutter is controlled by the intensity of the first half of a received pulse?
What happens if you send a single photon through that path, and rig the shutter to block the alternate path if it detects a hit in the time required for the direct path?
http://arxiv.org/abs/1310.4691
These experiments have been done.
That particular link is to a study that doesn’t recombine beams, and is still fully explained by a classical model. It does show that if you change the polarization of one of an entangled pair, the polarization of the other does not change.
But I’m concerned about something different. Specifically, I’m responding to the observation that if a beam is polarized to vertical, split via PBS to \ and / components, and then recombined, the recombined beam is vertically polarized (as measured by passing through a vertical/horizontal PBS and being directed to two detectors). I’m asking what happens if the beam is recombined after only one half of the beam has been delayed for longer than the beam length.
If / is delayed by 100ms (roughly the distance to geostationary orbit, although materials with a lower speed of light might be used) and the pulse length is ~30ms, I expect the detectors to indicate a 30ms pulse equally between them, followed 100ms later by another. I have not found an experiment that tests this or a closely analogous case.
If / is precise to within the limits of experimentation, but the pulse length is a single photon, I expect the detectors to detect with 50% chance. I believe that I have seen summaries of experiments that say my expectations are incorrect. I assume here that it is not within our ability to match two path lengths to within the time it takes light to travel the distance occupied by one photon, if nothing else due to Brownian motion of the lowest-temperature medium we can have.
If my expectation in the former case is incorrect, I ask what happens if there is a shutter placed in the path of /, such that iff the detectors indicate a vertically polarized beam, / is blocked before it recombines.
The fundamental descriptive mathematics are known—the interpretations are still debated. As has been the case for nearly a century now, and I don’t see that changing anytime in the immediate future. And if you recombine all four sets of split beams, then there isn’t anything interesting going on there, either; half still goes through, same as before, and predictably so. That is, if you direct one polarization one direction, and another in another, and then recombine them—and there’s the snag, see. You can’t combine them without re-emitting both of them; you’re performing an additional operation which is generating/modifying information. You aren’t reproducing lost information; you’re generating new information which is equivalent to the lost information.
For the fundamental physics to be known, they must be falsifiable, and have passed that test. This is not the case. The mathematics are passing with flying colors, of course—nobody is entirely sure what the mathematics mean, however. (Everybody thinks they do, though.)
Adirian: You aren’t reproducing lost information; you’re generating new information which is equivalent to the lost information.
An interesting but ultimately futile attempt at semantic hair-splitting.
If I redirect the two beams of a polarizing splitter together, I get back the same polarization that went in—regardless of what that incoming polarization was. I am not producing a new beam from scratch that “just happens” to match the incoming polarization, because if I change the incoming polarization, the outgoing polarization that has been produced as “new information” changes in a systematically corresponding way.
By definition, by Liouville’s Theorem, and by the Markov property of time, when information can be systematically “generated as new” in a way that precisely corresponds to old information, that old information has not been “lost”.
Furthermore, information about incoming polarization shows up mathematically in the relative phase of the split beams, so only someone who believes the wavefunction is a hallucination would think the information has been “lost”. It’s right there in the amplitude distribution.
Eliezer, are you going to address the objection that we don’t know the physics, only the modeling math?
Eliezer -
“Information” in this case is the properties; my apologies, I am loose with language. The properties were transformed—and, in the case of a splitting beam, with a 1-1 function. The properties were “lost” when they were split—they weren’t the same as they were before. But they weren’t irrecoverably lost. (At least close enough for testing; you may have medium degradation, i.e., property attenuation, depending upon the quality of the crystals and the intermediate material, provided it isn’t in a vacuum.)
To irrecoverably lose properties, you need a non 1-1 function—which is exactly what we had when we sent them through the filter rather than the splitter.
circular sunglasses that have two polarized disks
(1) Circular sunglasses seem to be out of fashion at the moment.
(2) The polarization of sunglasses is chosen to eliminate glare from reflections off a rain-soaked street.
Hair splitting? I hardly think so. That’s too close to confirmation bias.
If you look at the hyperphysics page on light you’ll find many frequencies of color A and color B mixing to match one color C. Our eyes only measure the energy and call that a color.
A little of A (the lower frequency) plus a lot of B (the higher frequency) can look the same as a lot of A plus a little of B.
If anyone can see both the energy and the interference pattern then they could tell them apart. It seems that most people can’t.
But there you have a case where information is destroyed because it is indistinguishable.
This article has left me confused. I don’t see how this notion of phase relates back to a locally-evolving amplitude distribution over a space of configurations, if those configurations are merely the locations of some indistinguishable lumps in a field. How do these phases become manifest in the configuration space? It seems like there must be more to the configuration space than just a set of locations for each type of “particle”, but previous articles have claimed otherwise.
The configuration space is a field where there’s a number at each place. This number is a complex number.
Does that help?
This left me totally confused too.
But then, I realized that there is a property of photons that can help with this confusion here: spin. So, the configuration space is not a “a photon here and a photon there...”, but a “a photon with a +1 spin here, a photon with −1 spin there...” And then this phase thing arises from the values of the amplitude distribution for the configurations with photons of opposite spins. This makes the math quite a bit easier too.
I might be completely mistaken about this, though.
So I was following this post all the way through, right up until I got to this part:
It isn’t explained what actually occurs here, and I’ve been unable to decipher the following paragraphs as a result. Could this be clarified, please?
Combine a beam splitter with two polarizing filters.
EDIT: I read the relevant paragraph, and what I described above probably doesn’t work. However, not all is lost: there is a device that more precisely does what Eliezer says: Beam-splitting polarizers.
Don’t oriented oscillating E and B fields explain this adequately at the macroscopic level? If you orient a polarizer at an angle θ to the orientation of the E field of the electromagnetic wave (i.e., light), the field gets projected as E cos(θ) (the component perpendicular to the polarizer gets absorbed), and so the intensity goes as E²cos²(θ). That obeys the same mathematics without invoking the quantum magic wand.
Yep, Maxwell equations do produce the same results. The fun quantum thing is that this also happens with individual photons.
What about if one path length is longer than the other, by either more than one light-beam length, or, in the case of an individual photon, more than one light-photon length? I’m assuming that matching two different paths to that level of precision is improbable even intentionally.
Can a beam of light or a single photon interact with itself non-locally? What if the alternate path has a different detector intermittently intercepting it?
I used the tangent of the angle between polarisers, and at 45 degrees, tan is 1, hence the light goes through. This is classical.