What happens when I measure an entangled particle at A after choosing an orientation, you measure it at B, and we’re a light-year apart, moving at different speeds, and each of us measures “first” in our own frame of reference?
Why do these so-called “amplitudes” resolve into probabilities when I measure something, but not when they’re just being microscopic? When exactly do they resolve? How do you know?
Why is the wavefunction real enough to run a quantum computer but not real enough to contain intelligences?
These are all questions that must be faced by any attempted single-world theory. Without specific evidence pointing to a single world, they are not only lethal for the single-world theory but lethal for anyone claiming that we have good reason to believe in it.
Answering from within a zigzag interpretation:
Something self-consistent. And nothing different from what quantum theory predicts. It’s just that there aren’t any actual superpositions; only one history actually happens.
Quantum amplitudes are (by our hypothesis) the appropriate formal framework for situations where you have causal loops in time. The less physically relevant such loops are, the more the description reverts to classical probability theory.
A quantum computation is a self-consistent standing wave of past-directed and future-directed causal chains. The extra power of quantum computation comes from this self-consistency constraint plus the programmer’s ability to set the boundary conditions. A quantum computer’s wavefunction evolution is just the ensemble of its possible histories along with a nonclassical probability measure. Intelligences (or anything real) can show up “in a wavefunction” in the sense of featuring in a possible history.
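The “ensemble of possible histories with a nonclassical measure” can be made concrete with a toy sum-over-histories (my own minimal sketch, not anything from the zigzag literature): a qubit passes through two Hadamard gates, and the final probabilities come from interfering the amplitudes of the four possible histories rather than from adding their squares.

```python
import math

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]  # Hadamard gate, indexed H[out][in]

def amplitude(history):
    """Product of per-step amplitudes along one possible history."""
    a = 1.0
    for x, y in zip(history, history[1:]):
        a *= H[y][x]
    return a

# All histories of a qubit that starts in 0 and passes through two Hadamards.
histories = [(0, m, b) for m in (0, 1) for b in (0, 1)]

for b in (0, 1):
    amp = sum(amplitude(h) for h in histories if h[-1] == b)
    quantum = amp ** 2                                    # interfere, then square
    classical = sum(amplitude(h) ** 2 for h in histories if h[-1] == b)
    print(b, round(quantum, 6), round(classical, 6))
    # quantum gives 1.0 and 0.0; the classical measure gives 0.5 and 0.5
```

The outcome b = 1 is killed by destructive interference between two histories that a classical probability measure would simply add.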
(Note for clarity: I am not specifically advocating a zigzag interpretation. I was just answering in a zigzag persona.)
Well, we know there’s at least one world. What’s the evidence that there’s more than one? Basically it’s the constructive and destructive interference of quantum probabilities (both illustrated in the double-slit experiment). The relative frequencies of the quantum events observed in this world show artefacts of the way that the quantum measure is spread across the many worlds of configuration space. Or something. But single-world explanations of the features of quantum probability do exist—see above.
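The double-slit evidence just mentioned fits in a few lines. A toy sketch (all geometry and units invented for illustration): adding the two slit amplitudes and then squaring produces fringes, while adding the two probabilities directly gives a flat distribution.

```python
import numpy as np

# Toy double-slit: slits at x = -1 and x = +1, screen 10 units away, wavenumber 2.
x = np.linspace(-5, 5, 11)                          # positions on the screen
psi1 = np.exp(1j * 2.0 * np.hypot(x - 1.0, 10.0))   # amplitude via slit 1
psi2 = np.exp(1j * 2.0 * np.hypot(x + 1.0, 10.0))   # amplitude via slit 2

quantum = np.abs(psi1 + psi2) ** 2                  # add amplitudes, then square
classical = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # add probabilities directly

# classical is flat (2.0 everywhere); quantum shows fringes between 0 and 4
```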
Gonna be pretty hard to square that with both Special Relativity and the Markov requirement on Pearl causal graphs (no correlated sources of background uncertainty once you’ve factored reality using the graph).
I only just noticed this reply. I’m not sure what the relevance of the Markov condition is. You seem to be saying “I have a formalism which does not allow me to reason about loops in time, therefore there shall be no loops in time.”
The Markov requirement is a problem for saying, “A does not cause B, B does not cause A, they have no common cause, yet they are correlated.” That’s what you have to do to claim that no causal influence travels between spacelike separated points under single-world quantum entanglement. You can’t give it a consistent causal model.
Consider a single run of a two-photon EPR experiment. Two photons are created in an entangled state; they fly off at light speed in opposite directions, and eventually each encounters a polarizing filter and is either absorbed or not absorbed. Considered together, their worldlines (from point of creation to point of interaction) form a big V in space-time, with the two upper tips of the V being spacelike separated.
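For concreteness, the correlations in this setup are strong enough to rule out a pure common-cause explanation. With a polarization-entangled photon pair, the correlation between the two ±1 (transmitted/absorbed) outcomes at polarizer angles a and b is cos 2(a − b), and the standard CHSH combination of four settings exceeds the bound of 2 that any common-cause model must satisfy:

```python
import math

def E(a, b):
    """Quantum correlation of the two +/-1 outcomes at polarizer angles a, b."""
    return math.cos(2 * (a - b))

deg = math.pi / 180
a1, a2 = 0 * deg, 45 * deg         # Alice's two polarizer settings
b1, b2 = 22.5 * deg, 67.5 * deg    # Bob's two polarizer settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
# S = 2*sqrt(2) ~ 2.83; any model in which the correlation is carried entirely
# by a common cause at the bottom of the V obeys |S| <= 2 (the CHSH bound)
```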
In these zigzag interpretations, you have locally mediated correlations extending down one arm of the V and up the other. The only tricky part is at the bottom of the V. In Mark Hadley’s theory, there’s a little nonorientable region in spacetime there, which can reverse the temporal orientation of a timelike chain of events with respect to its environment without interrupting the internal sequence of the chain. In John Cramer’s transactional interpretation, each arm of the V is a four-dimensional standing wave (between the atoms of the emitter and the atoms of the detector) containing advanced and retarded components, and it is the fact that the same emitter sits at the base of both standing waves which compels them to be mutually consistent and not just internally consistent. There may be still other ways to work out the details, but I think the intuitive picture is straightforward.
Do the A measurement and result happen first, or do the B measurement and result happen first, or does some other thing happen first that is the common cause of both results? If you say “No” to all three questions, then you have an unexplained correlation. If you say “Yes” to either of the first two questions, you have a global space of simultaneity. If you say “Yes” to the third question, you’re introducing some whole other kind of causality that has no ordinary embedding in the space and time we know, and you shall need to say a bit more about it before I know exactly how much complexity to penalize your theory for.
The physics we have is at least formally time-symmetric. It is actually noncommittal as to whether the past causes the present or the future causes the present. But this doesn’t cause problems, as it does in these zigzag interpretations, because timelike orientations are always maintained, and so whichever convention is adopted, it’s maintained everywhere.
The situation in a zigzag theory (assuming it can be made to work; I emphasize that I have not seen a Born derivation here either, though Hadley in effect says he’s done it) is the same except that timelike orientations can be reversed, “at the bottom of the V”. In both cases you have causal chains where either end can be treated as the beginning. In one case the chain is (temporally) I-shaped, in the other case it’s V-shaped.
So I’m not sure how to think about it. But maybe best is to view the whole of space-time as “simultaneous”, to think of local consistency (perhaps probabilistic) rather than local causality, and to treat the whole thing as a matter of global consistency.
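One hypothetical toy version of “global consistency rather than local causality” (my own illustration, not a construction from Hadley or Cramer): treat a causal loop as a constraint equation, and call a history allowed only if it solves the constraint.

```python
def consistent_histories(rule):
    """Histories of one bit carried around a closed causal loop.

    A history is globally consistent iff the value the loop sends back
    equals the value that was there to begin with: a fixed point of `rule`.
    """
    return [b for b in (0, 1) if rule(b) == b]

consistent_histories(lambda b: b)      # identity loop: [0, 1], two allowed histories
consistent_histories(lambda b: 1 - b)  # negating loop: [], no deterministic history
```

The negating loop is the case where no single deterministic history exists at all, which is one motivation for letting the consistency condition act on a measure over histories rather than on one history.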
The Novikov self-consistency principle for classical wormhole space-times seems like it might pose similar challenges.
By the way, can’t I ask you, as a many-worlder, precisely the same question—does A happen first, or does B happen first?
My understanding was that Eliezer is more taking time out of the equation than worrying about which “happen[ed] first.”
His questions make no sense to me from a timeless perspective. They seem remarkably unsophisticated for him.
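For the record, the frame-dependence in the original question is easy to exhibit: for spacelike-separated events, the Lorentz transformation yields opposite time orderings in different frames. A minimal check in units where c = 1 (t in years, x in light-years):

```python
import math

def t_prime(t, x, v):
    """Time coordinate of event (t, x) for an observer moving at speed v."""
    gamma = 1 / math.sqrt(1 - v ** 2)
    return gamma * (t - v * x)

# A at the origin, B one light-year away, simultaneous in the lab frame.
A = (0.0, 0.0)
B = (0.0, 1.0)

t_prime(*A, 0.5) > t_prime(*B, 0.5)    # True: B happens first in this frame
t_prime(*A, -0.5) < t_prime(*B, -0.5)  # True: A happens first in this frame
```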
“What happens when I measure an entangled particle at A after choosing an orientation, you measure it at B, and we’re a light-year apart, moving at different speeds, and each of us measures “first” in our own frame of reference?
Why do these so-called “amplitudes” resolve into probabilities when I measure something, but not when they’re just being microscopic? When exactly do they resolve? How do you know?
Why is the wavefunction real enough to run a quantum computer but not real enough to contain intelligences?
These are all questions that must be faced by any attempted single-world theory. Without specific evidence pointing to a single world, they are not only lethal for the single-world theory but lethal for anyone claiming that we have good reason to believe in it.”
No, No, No and No....
Until we have both a unifying theory of physics and conclusive proof of wavefunction collapse one way or the other, the single-world vs. many-worlds debate will remain relevant.
“Why is the wavefunction real enough to run a quantum computer but not real enough to contain intelligences?”
Not the right question. Being charitable, I will assume you’re asking about the objective reality of the wavefunction. But this has nothing to do with intelligence or anything of the sort.
It is really nauseating to watch a bunch of non-physicists being convinced by their own non-technical arguments on a topic where the technical detail is the only detail that counts.
The best thing you guys can do for yourselves is learn some physics or stop talking about it. I am trying to help you guys save face.
Just in case anyone is interested in responding: don’t bother. I don’t have enough respect for anyone here to care what you have to say.
This is definitely an area where I wouldn’t presume to have my own opinion.
Still, I’m pretty convinced that Porter and Yudkowsky have really learned something about quantum physics.