That is not what the uncertainty principle says. The uncertainty principle says that you can’t measure two complementary observables such as position and momentum or energy and time to arbitrary accuracy at the same time. However it does not say that you can’t measure any one observable such as position or momentum to an arbitrary degree of accuracy.
If you have a set of entangled particles, would it not be possible to measure one aspect of each particle in the set, and thus achieve a fully accurate observation?
Do you know calculus? If so, it will be very easy to explain what the uncertainty principle actually means quantitatively, which should reduce any qualitative confusion.
I know calculus. Not enough to enjoy looking at the harmonic equation though.
It’s a shame I never took a class on Quantum Mechanics. Most descriptions I’ve heard of it, even from professors, are indistinguishable from magical thinking.
Ok. Here’s the brief sketch with slightly simplified details:
In classical mechanics, “position” and “momentum” are different features, and so can be totally independent. In quantum mechanics, “position” and “momentum” are both derived from the same source (the wavefunction), and thus are dependent. In QM, reality is the wavefunction. This is a complex-valued continuous function over the spatial dimensions of the universe whose squared magnitude integrates to 1. Let’s consider a universe with only one particle in it:
If you want to find out something classically recognizable about that particle, you use an operator on the wavefunction. (The classical values now come with probabilities attached to them, and in realistic situations it only makes sense to ascribe probabilities to position and momentum ranges, even though energy is restricted to particular values.)
For the position of the particle, this corresponds to integrating the squared magnitude of the wavefunction across the part of space that you’re interested in. For the momentum of the particle, the operator involves the derivative, which cashes out as taking the wavefunction’s Fourier transform. The more localized a particle is in position-space, the more spread out it is in momentum-space, because the Fourier transform of something narrow is broad, and the Fourier transform of something broad is narrow.
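That narrow/broad tradeoff can be checked numerically. Here’s a small sketch (NumPy, with made-up grid sizes and an arbitrary Gaussian wave packet) that builds a normalized position-space wavefunction, Fourier-transforms it, and compares the two spreads:

```python
import numpy as np

# A Gaussian wave packet: narrow in position-space. Toy numbers, no physical units.
sigma = 0.5
x = np.linspace(-20, 20, 4096)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize: |psi|^2 integrates to 1

prob_x = np.abs(psi)**2
std_x = np.sqrt(np.sum(x**2 * prob_x) * dx)   # spread in position-space

# Momentum-space amplitudes come from the Fourier transform of psi.
k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)
dk = 2 * np.pi / (len(x) * dx)
prob_k = np.abs(np.fft.fft(psi))**2
prob_k /= np.sum(prob_k) * dk                 # normalize the momentum distribution
std_k = np.sqrt(np.sum(k**2 * prob_k) * dk)   # spread in momentum-space

print(std_x, std_k, std_x * std_k)
```

For a Gaussian the product of the two spreads comes out at the minimum value the uncertainty relation allows (1/2 in these units), and shrinking `sigma` fattens the momentum-space distribution by exactly the inverse factor.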
Now, what about entanglement? Let’s add some more particles to our universe; now, the wavefunction is defined over three spatial dimensions per particle. In typical situations, we can factor the wavefunction of the universe into independent wavefunctions for each particle, which are then multiplied together. When particles are entangled, that means we can’t factor the universe’s wavefunction when it comes to the set of entangled particles- they’re dependent on each other / unified in some way. This doesn’t alter where position and momentum come from- they’re both still the same functions of the wavefunction, with the same fundamental restriction.
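The “can’t factor” condition can be made concrete in the simplest case, two two-state particles. A minimal sketch (NumPy; the example states are illustrative): a joint state factors into independent single-particle states exactly when its coefficient matrix has a single nonzero singular value:

```python
import numpy as np

# Two-qubit states as 2x2 coefficient matrices C[a, b] = amplitude of |a>|b>.
# The state factors into independent single-particle states iff C has rank 1.
product = np.outer([1, 0], [1, 1] / np.sqrt(2))   # |0> x |+>, a factorable state
bell = np.array([[1, 0], [0, 1]]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2), entangled

def schmidt_rank(C, tol=1e-12):
    """Number of nonzero singular values; 1 means factorable, >1 means entangled."""
    return int(np.sum(np.linalg.svd(C, compute_uv=False) > tol))

print(schmidt_rank(product))  # 1
print(schmidt_rank(bell))     # 2
```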
[edit] My interpretation of the EPR Paradox is that it basically asserts the reality of the wavefunction, and that the wavefunction is over the universe, not particular particles. I think this is the majority view but I haven’t paid too much attention to the issue.
Erm... not quite... you technically run into “knowledge about each element of a system versus knowledge about the entire system” tradeoffs.
Although... you CAN partially bypass “no quantum Xerox” (the no-cloning theorem) if you have a large enough sample. It’s the principle used in error correction for quantum computers.
Take a laser, point it at a perfect polarizer of unknown orientation, and fire a pulse. Send the photons that get through, one by one, through filters of known orientation as you home in convergently (hehe) on the orientation of the first polarizer.
There is a tiny chance that you won’t end up with a remotely correct value, and you never get an exact answer from a finite sample, but you can probably do well enough to satisfy the typical engineer with only a couple hundred photons.
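For what it’s worth, here’s a toy simulation of that scheme (Python/NumPy; the hidden angle, photon counts, and grid-search estimator are all made-up assumptions, with each photon passing a filter with probability cos²(Δθ) per Malus’s law):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.deg2rad(37.0)   # the unknown orientation (hypothetical value for the demo)

def pulse_through_filter(filter_angle, n_photons):
    # Each photon independently passes with probability cos^2(theta - phi) (Malus's law).
    p = np.cos(theta_true - filter_angle) ** 2
    return rng.binomial(n_photons, p)

# Send ~50 photons through each of 12 filters of known orientation.
angles = np.deg2rad(np.arange(0, 180, 15))
fractions = np.array([pulse_through_filter(a, 50) for a in angles]) / 50

# Pick the candidate orientation whose predicted pass fractions best fit the data.
candidates = np.deg2rad(np.linspace(0, 180, 3601))
errors = [np.sum((fractions - np.cos(c - angles) ** 2) ** 2) for c in candidates]
theta_hat = np.rad2deg(candidates[np.argmin(errors)])
print(theta_hat)   # an estimate of the hidden orientation, in degrees
```

With 600 photons the estimate typically lands within a couple of degrees of the hidden orientation; it is never exact, which is the point of the comment above.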
I’m not a physicist, and I couldn’t give a technical explanation of why that won’t work (although I feel like I can grasp an intuitive idea based on how the Uncertainty Principle works to begin with). However, remember the Litany of a Bright Dilettante. You’re not going to spot a trivial means of bypassing a fundamental theory in a field like physics after thinking for five minutes on a blog.
Incidentally, the Uncertainty Principle doesn’t talk about the precision of our possible measurements, per se, but about the actual amplitude distribution for the observable. As you get arbitrarily precise along one of the pair you get arbitrarily spread out along the other, so that the second value is indeterminate even in principle.
I didn’t come up with it. It’s called the EPR Paradox.
Neat. Consider my objection retracted. Although I suspect someone with more knowledge of the material could give a better explanation.
I’m going to read the QM sequence now. I have always been confused by descriptions of QM.