But if there is an infinite regress of causality, I should find that highly curious, and would like to hear Explained why it is allowed, and why this infinite regress exists rather than some other one.
To question the infinite chain of explanations you must first observe that it is indeed infinite. If the terminal explanation is always “just around the corner” you’ll never reach that point.
I don’t understand why you assign a lower probability to the possibility of an infinite regress of causality than to the possibility of a non-causal event or causal loop.
At some point, the answer may become “we cannot know”. For example, in quantum mechanics, the uncertainty principle tells us that there is a limit to the accuracy of our measurements, and once we hit that limit, attaining more accuracy is impossible. The big bang is similar—if time makes no sense in a singularity, perhaps we can’t know what happens before that point. Maybe at some point we will find a way around these limitations, in which case it was just another instance of hitting Explain and letting science grind along, but it could be that we have already reached the ultimate limit, and no more explanations will ever come.
That is not what the uncertainty principle says. The uncertainty principle says that you can’t measure two complementary observables, such as position and momentum or energy and time, to arbitrary accuracy at the same time. However, it does not say that you can’t measure any one observable, such as position or momentum, to an arbitrary degree of accuracy.
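For reference, the standard quantitative form for the position and momentum pair, with each Δ the standard deviation of the corresponding probability distribution, is

$$\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}$$

and the general form for any two observables is $\Delta A \,\Delta B \ge \tfrac{1}{2}\,\lvert\langle[\hat{A},\hat{B}]\rangle\rvert$ (the energy and time version looks the same but has a somewhat different status, since time is not an operator in ordinary QM).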
If you have a set of entangled particles, would it not be possible to measure one aspect of each particle in the set, and thus achieve a fully accurate observation?
Do you know calculus? If so, it will be very easy to explain what the uncertainty principle actually means quantitatively, which should reduce any qualitative confusion.
I know calculus. Not enough to enjoy looking at the harmonic equation though.
It’s a shame I never took a class on Quantum Mechanics. Most descriptions I’ve heard of it, even from professors, are indistinguishable from magical thinking.
Ok. Here’s the brief sketch with slightly simplified details:
In classical mechanics, “position” and “momentum” are different features, and so can be totally independent. In quantum mechanics, “position” and “momentum” are both derived from the same source (the wavefunction), and thus are dependent. In QM, reality is the wavefunction. This is a complex-valued continuous function over the spatial dimensions of the universe whose squared magnitude integrates to a fixed total (conventionally normalized to 1). Let’s consider a universe with only one particle in it:
If you want to find out something classically recognizable about that particle, you use an operator on the wavefunction. (The classical values now come with probabilities attached to them, and in realistic situations it only makes sense to ascribe probabilities to position and momentum ranges, even though energy is restricted to particular values.)
For the position of the particle, this corresponds to integrating the squared magnitude of the wavefunction across the part of space that you’re interested in. For the momentum of the particle, the operator is the derivative, which cashes out as taking its Fourier transform. The more localized a particle is in position-space, the more spread out it is in momentum-space, because the Fourier transform of something narrow is broad, and the Fourier transform of something broad is narrow.
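A minimal numerical sketch of that narrow/broad tradeoff, assuming a Gaussian wavepacket on a discrete grid and using numpy’s FFT; the grid size and the width values are arbitrary choices for illustration, with ħ set to 1:

```python
import numpy as np

# Spatial grid (arbitrary units, hbar = 1)
N = 4096
L = 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

def widths(sigma_x):
    """Return (spread in x, spread in p) for a Gaussian wavepacket of width sigma_x."""
    # Normalized Gaussian wavefunction in position space
    psi = np.exp(-x**2 / (4 * sigma_x**2))
    psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

    # Position spread from the probability density |psi|^2
    prob_x = np.abs(psi)**2 * dx
    mean_x = np.sum(x * prob_x)
    std_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x))

    # Momentum-space wavefunction via the Fourier transform (p = hbar * k = k here)
    phi = np.fft.fftshift(np.fft.fft(psi))
    p = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
    prob_p = np.abs(phi)**2
    prob_p = prob_p / np.sum(prob_p)   # normalize the discrete momentum distribution
    mean_p = np.sum(p * prob_p)
    std_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p))
    return std_x, std_p

# Squeezing the packet in position space spreads it out in momentum space;
# for a Gaussian the product saturates the bound: std_x * std_p ~= 0.5 (= hbar/2).
for sigma in (0.5, 2.0, 8.0):
    sx, sp = widths(sigma)
    print(f"sigma={sigma:4.1f}  std_x={sx:6.3f}  std_p={sp:6.3f}  product={sx * sp:.3f}")
```

The printed product stays pinned near 0.5 (i.e. ħ/2) however you change the width, which is the tradeoff described above.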
Now, what about entanglement? Let’s add some more particles to our universe; now, the wavefunction is defined over three spatial dimensions per particle. In typical situations, we can factor the wavefunction of the universe into independent wavefunctions for each particle, which are then multiplied together. When particles are entangled, that means we can’t factor the universe’s wavefunction when it comes to the set of entangled particles: they’re dependent on each other / unified in some way. This doesn’t alter where position and momentum come from: they’re both still the same functions of the wavefunction, with the same fundamental restriction.
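A small sketch of what “can’t factor” means, using a pair of qubits as a discrete stand-in for the continuous wavefunction; the test for factorability here is the Schmidt rank (computed via an SVD), and the two example states are just the standard product and Bell states:

```python
import numpy as np

def schmidt_rank(state):
    """Number of nonzero Schmidt coefficients of a two-qubit pure state.

    Rank 1 means the joint state factors into (particle A) x (particle B);
    rank > 1 means the particles are entangled.
    """
    # Reshape the 4-component state vector into a 2x2 matrix indexed by (A, B)
    m = np.asarray(state, dtype=complex).reshape(2, 2)
    singular_values = np.linalg.svd(m, compute_uv=False)
    return int(np.sum(singular_values > 1e-12))

# |0>_A (x) |+>_B : a product state, factors cleanly
product = np.kron([1, 0], [1, 1]) / np.sqrt(2)

# (|00> + |11>)/sqrt(2) : a Bell state, cannot be factored
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

print("product state Schmidt rank:", schmidt_rank(product))  # 1 -> separable
print("Bell state    Schmidt rank:", schmidt_rank(bell))     # 2 -> entangled
```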
[edit] My interpretation of the EPR Paradox is that it basically asserts the reality of the wavefunction, and that the wavefunction is over the universe, not particular particles. I think this is the majority view but I haven’t paid too much attention to the issue.
erm...not quite...you technically run into “knowledge about each element of a system versus knowledge about the entire system” tradeoffs.
Although...you CAN partially bypass “no quantum Xerox” if you have a large sample. It’s the principle used in error-correction for quantum computers.
Take a laser. Point it at a perfect polarizer of unknown orientation, and fire a pulse. Send the photons that get through one by one through filters of known orientation as you home in convergently (hehe) on the orientation of the first polarizer.
There is a tiny chance that you won’t end up with a remotely correct value, and you never get an exact answer with a finite sample, but you can probably do well enough to satisfy the typical engineer with only a “couple hundred” photons.
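A rough Monte Carlo version of that estimate, assuming Malus’s law for the pass probability and a simple non-adaptive grid-search maximum-likelihood fit rather than the convergent honing-in described above; all the specific numbers are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

true_angle = np.deg2rad(37.0)    # orientation of the unknown polarizer (hidden from the estimator)
n_photons = 200                  # the "couple hundred" photons

# Each photon leaves the unknown polarizer polarized at true_angle, then meets an
# analyzer at a known angle; it passes with probability cos^2(difference) (Malus's law).
analyzer_angles = rng.uniform(0.0, np.pi, n_photons)
pass_prob = np.cos(analyzer_angles - true_angle) ** 2
outcomes = rng.random(n_photons) < pass_prob   # True = photon detected behind the analyzer

# Grid-search maximum likelihood over candidate orientations in [0, pi)
candidates = np.linspace(0.0, np.pi, 1800, endpoint=False)
log_likelihood = np.zeros_like(candidates)
for i, theta in enumerate(candidates):
    p = np.cos(analyzer_angles - theta) ** 2
    p = np.clip(p, 1e-12, 1 - 1e-12)           # avoid log(0)
    log_likelihood[i] = np.sum(np.where(outcomes, np.log(p), np.log(1 - p)))

estimate = candidates[np.argmax(log_likelihood)]
print(f"true angle     : {np.rad2deg(true_angle):.2f} deg")
print(f"estimated angle: {np.rad2deg(estimate):.2f} deg")
```

With a few hundred photons the estimate typically lands within a couple of degrees, which is the point being made: estimating a classical setting from many identically prepared photons is allowed, and it doesn’t bypass the uncertainty principle.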
I’m not a physicist, and I couldn’t give a technical explanation of why that won’t work (although I feel like I can grasp an intuitive idea based on how the Uncertainty Principle works to begin with). However, remember the Litany of a Bright Dilettante. You’re not going to spot a trivial means of bypassing a fundamental theory in a field like physics after thinking for five minutes on a blog.
Incidentally, the Uncertainty Principle doesn’t talk about the precision of our possible measurements, per se, but about the actual amplitude distribution for the observable. As you get arbitrarily precise along one of the pair you get arbitrarily spread out along the other, so that the second value is indeterminate even in principle.
I don’t believe that for a second though. Everything we know is likely as wrong as phlogiston, though our predictions are surely getting more accurate. “We cannot know” is just hitting the worship button—which I’m fine with if you are talking about “what’s the meaning of life.” However, this is the mechanics of the universe, so we should probably stay away from that particular button in this case. Don’t forget a singularity is Not an anomaly in reality itself; it is an anomaly in our models’ ability to predict what will happen in reality. So if time makes no sense in a singularity, that means the model for time will need to be changed. That’s not the same as saying there is no answer.
I say “must” in the Worship option. It is irony.
I didn’t come up with it. It’s called the EPR Paradox.
Neat. Consider my objection retracted. Although I suspect someone with more knowledge of the material could give a better explanation.
I’m going to read the QM sequence now. I have always been confused by descriptions of QM.