The Elitzur-Vaidman bomb testing device is an example of a similar phenomenon. What law of physics precludes the construction of a device that measures blood sugar but with the needle (virtually never) penetrating the skin?
And if no law of physics precludes something from being done, then only our lack of knowledge prevents it from being done.
So if there are no laws of physics that preclude developing bomb-testing and sugar-measuring devices, our arguments against them have nothing to do with the laws of physics, but instead with other parameters, like lack of knowledge or cost. So if the laws of physics do not preclude things from happening, we might as well assume that they can happen, in order to learn from the physics of these possible situations.
So for the purposes of understanding what our physics says can happen, it becomes reasonable to posit that devices have been constructed that can test the activity of Elitzur-Vaidman bombs without (usual) detonation or measure blood sugars without needles (usually) penetrating the skin. It is reasonable to posit this because the known laws of physics do not forbid this.
So those who do not believe in the multiverse but still believe in their own rationality do need to answer the question, “Where is the arm from which the blood was drawn?”
Or, individuals denying the possibility of such a measuring device being constructed need to posit a new law of physics that prevents Elitzur-Vaidman bomb testing devices from being constructed and blood sugar measuring devices (that do not penetrate the skin) from being constructed.
If they posit this new law, what is it?
In the Elitzur-Vaidman bomb test, information about whether the bomb has exploded does not feed into the experiment at any point. When you shoot photons through the interferometer, you are not directly testing whether the bomb would explode or has exploded elsewhere in the multiverse; you are testing whether the sensitive photon detector in the bomb trigger works.
As wnoise said, to directly gather information from a possible history, the history has to end in a physical configuration identical to the one it is being compared with. The two histories represent two paths through the multiverse, if you wish, with a separate flow of quantum amplitude along each path in configuration space, and then the flows combine and add when the histories recombine by converging on the same configuration.
In the case of an exploded bomb, this means that for a history in which the bomb explodes to interfere with a history in which the bomb does not explode, the bomb has to reassemble somehow! And in a way which does not leave any other physical traces of the bomb having exploded.
In the case of your automated blood glucose meter coupled to a quantum switch, for the history where the reading occurs to interfere with the history where the reading does not occur, the reading and all its physical effects must similarly be completely undone. Which is going to be a problem since the needle pricked flesh and a pain signal was probably conveyed to the subject’s brain, creating a memory trace. You said something about “briefly freezing a small component of blood and skin on a live person”, so maybe you appreciate this need for total reversibility.
In the case of counterfactual measurements which have actually been performed, very simple quantum systems were involved, simple enough that the reversibility, or the maintenance of quantum coherence, was in fact possible.
However, I totally grant you that the much more difficult macro-superpositions appear to be possible in principle, and that this does pose a challenge for single-world interpretations of quantum theory. They need to either have a single-world explanation for where the counterfactual information comes from, or an explanation as to why the macro-superpositions are not possible even in principle.
Such explanations do in fact exist. I’ll show how they work, again using the Elitzur-Vaidman bomb test.
The bomb test uses destructive interference as its test pattern. Destructive interference is seen in the dark zones in the double slit experiment. Those are the regions where (in a sum-over-histories perspective) there are two ways to get there (through one slit, through the other slit), but the amplitudes for the two ways cancel, so the net probability is zero. The E-V bomb-testing apparatus contains a beam splitter, a “beam recombiner”, and two detectors. It is set up so that when the beam proceeds unimpeded through the apparatus, there is total destructive interference between the two pathways leading to one of the detectors, so the particles are only ever observed to arrive at the other detector. But if you place an object capable of interacting with the particle in one of the paths, that will modify the portion of the wavefunction traveling along that path (part of the wavefunction will be absorbed by the object), the destructive interference at the end will only be partial, and so particles will sometimes be observed to arrive at that detector.
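To make those amplitudes concrete, here is a minimal numerical sketch (my own toy illustration, using one standard 50/50 beam-splitter convention; the setup described above is not tied to any particular convention):

```python
import numpy as np

# One conventional 50/50 beam splitter acting on the two path amplitudes.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

photon_in = np.array([1, 0], dtype=complex)  # photon enters along path 0

# Unimpeded interferometer (or a dud bomb the photon passes through):
# the two beam splitters recombine the paths and one detector goes dark.
out_free = BS @ (BS @ photon_in)
print(np.abs(out_free) ** 2)        # [0. 1.] -- detector 0 never fires

# Live bomb in path 1: it absorbs that path's portion of the wavefunction.
mid = BS @ photon_in
p_boom = abs(mid[1]) ** 2           # 0.5 -- the bomb explodes half the time
survived = np.array([mid[0], 0.0])  # path-1 amplitude removed by absorption
out_bomb = BS @ survived
print(p_boom, np.abs(out_bomb) ** 2)
# 0.5 [0.25 0.25] -- the formerly dark detector now fires 25% of the time,
# certifying a live bomb that did not explode.
```

Nothing in the numbers depends on interpretation; they are just the two-beam-splitter amplitudes described above.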
The many-worlds explanation is that when the object is there, it creates a new subset of worlds where the particle is absorbed en route; this disturbs the balance between worlds, and so now there are some worlds where the particle makes it to the formerly forbidden detector.
Now consider John Cramer’s transactional interpretation. This interpretation is all about self-consistent standing waves connecting past and future, via a transaction, a handshake across time, between “advanced” and “retarded” electromagnetic potentials (in the case of light). It’s like the Novikov self-consistency principle for wormhole histories; events arrange themselves so as to avoid paradox because logically they have to. That’s how I understand Cramer’s idea.
So, in the transactional framework, how do we explain the E-V bomb test? The apparatus, the experimental setup, defines the boundary conditions for the standing waves. When we have the interferometer with both pathways unimpeded (or with a “dud bomb”, which means that the photon detector in its trigger isn’t working, which means the photon passes right through it), the only self-consistent outcome is the one where the photon makes it to the detector experiencing constructive interference. But when there is an object in one pathway capable of absorbing a photon, we have three self-consistent outcomes: photon goes to one detector, photon goes to other detector, photon is absorbed by the object (which then explodes if it’s an E-V bomb, but that outcome is not part of the transaction, it’s an external causal consequence).
In general, the transactional interpretation explains counterfactual measurement or counterfactual computation through the constraint of self-consistency. The presence of causal chains moving in opposite temporal directions in a single history produces correlations and constraints which are nonlocal in space and time. By modulating the boundary conditions we are exploring logical possibilities, and that is how we probe counterfactual realities.
A completely different sort of explanation would be offered by an objective collapse theory like Penrose’s. Here, the prediction simply is that such macro-superpositions do not exist. By the way, in Penrose’s case, he is not just arbitrarily stipulating that macro-superpositions do not happen. He was led to this position by a quantum-gravity argument that superpositions of significantly different geometries are dynamically undefined. In general relativity, the rate of passage of time is internal to the geometry, but to evolve a superposition of geometries would require some calibration of one geometry’s time against the other. Penrose argued that there was no natural way to do this and suggested that this is when wavefunction collapse occurs. I doubt that the argument holds up in string theory, but anyway, for argument’s sake let’s consider how a theory like this analyzes the E-V bomb-testing experiment. The critical observation is that it’s only the photon detector in the bomb trigger which matters for the experiment, not the whole bomb; and even then, it’s not the whole photon detector, but just that particular combination of atoms and electrons which interacts with the photon. So the superposition required for the experiment to work is not macro at all, it’s micro but it’s coupled to macro devices.
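For what it’s worth, the usual back-of-the-envelope form of Penrose’s proposal (standard in his own presentations, not anything derived in this thread) is a collapse timescale

$$\tau \sim \frac{\hbar}{E_G},$$

where $E_G$ is the gravitational self-energy of the difference between the mass distributions of the two superposed branches. For a micro-superposition $E_G$ is tiny and $\tau$ is effectively infinite, so the bomb test goes through untouched; for a macroscopically distinct superposition $\tau$ becomes minuscule, which is exactly why such a theory predicts that the macro version never gets off the ground.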
This is a really good case study for quantum interpretation; I had to engage in quite a bit of thought and research to analyze it even this much. But the single-world schools of thought are not bereft of explanations even here.
“By modulating the boundary conditions we are exploring logical possibilities, and that is how we probe counterfactual realities (in the transactional interpretation)”
But note then that these “logical possibilities” must render a complete map of the blood and all its atomic and subatomic components and oxygen concentration, because without these components and a heart beating properly to oxygenate the blood, the measurement of the blood sugar would be wrong. But without an atmosphere, and a universe that allows an atmosphere to have appropriate oxygen content and lungs to breathe in the oxygen, the blood sugar measurement would also be wrong.
But it is not wrong.
So this “logical possibility” (blood sugar measurement with actual result) must simulate not only the blood, but the heart, the person, the planet on which he resides, and the universe which houses the planet, in order for the combined quantum state to appropriately render a universe to calculate the correct results of a blood sugar measurement (or any other wanted measurement) that is made on this merely “possible” universe. Does anyone seriously doubt that multiple different measurements could be made on this so-called merely “possible” universe to make sure that it performs like ours? (Blood sugar measurement, dimensions of the room in which the experiment was performed, color of the wall, etc.)
It is almost humorous to have to ask, “What is the difference between a map that renders every single aspect of a territory, including its subatomic structure, and the territory?”
It is strangely sad (and a tribute to positivism) that we must think that just because we cannot see the needle penetrating the skin, this implies that the blood is merely possible blood, not actual blood. Does our examination of fossils of dinosaurs really imply the mere existence of only possible dinosaurs, just because we can’t see the dinosaurs right now?
So, in order to eliminate the multiverse theory, opponents must believe that blood sugar measurements—on blood—in people—on planets—in a universe—are somehow not real just because you can’t see the needle penetrate the skin. How is this philosophically different from measuring our own blood? Why do we not call our own blood merely possible blood, since when we measure it we also only see the results of the measurement through the lens of our own implicit neuropsychological theories? All data is interpreted through theory, whether it is data about another universe or our own.
Or one must formulate a new law of physics, as Penrose does. Note that one formulates this new law, not because the old laws are not working, but merely because the multiverse conclusion does not seem right to him. I appreciate his honesty in implicitly agreeing that the multiverse conclusion follows unless a new law of physics is invented.
“So this ‘logical possibility’ (blood sugar measurement with actual result) must simulate not only the blood, but the heart, the person, the planet on which he resides and the universe which houses the planet, in order for the combined quantum state to appropriately render a universe to calculate the correct results of a blood sugar measurement (or any other wanted measurement) that is made on this merely ‘possible’ universe.”

Slow down there. In order to “simulate” the behavior of an entity X using counterfactual measurement, you need X (both actually and counterfactually) to be isolated from the rest of the universe (interactions must be weak enough not to decohere the superposition). To say that we must be able to simulate the rest of the universe because we could instead be measuring Y, Z, etc. is confusing the matter.
The basic claim of the counterfactualists is: We can find out about a possible state of X (call it X’) by inducing a temporary superposition in X (schematically, |X> goes to |X>+|X’>, and then back to |X>) while it is coupled to some other quantum system. We find out something about X’ by examining the final state of that other system, but X’ itself never actually existed, just X.
So the core claim is that by having quantum control over an entity, you can find out about how it would behave, without actually making it behave that way. This applies to any entity or combination of entities, though it will be much easier for some than others.
Now first I want to point out that being a single-world theorist does not immediately make you a counterfactualist about these measurements. All a single-world theorist has to do is to explain quantum mechanics without talking about a multiverse. Suppose someone were to say of the above process that what actually existed was X, then X’, and then X again, and that X’ while it existed interacted a little with the auxiliary quantum system. Suddenly the counterfactualist magic is gone, and we know about X’ simply because situation X’ really did exist for a while, and it left a trace of its existence in something else.
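Here is a toy two-qubit version of that schematic (my own sketch, not anything from the experiments mentioned: X is a single qubit with |X> = |0> and X’ = |1>, and the “other quantum system” is a probe qubit that flips only in the X’ branch):

```python
import numpy as np

def ry(theta):
    """Real rotation taking |0> to cos(t/2)|0> + sin(t/2)|1>."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

I2 = np.eye(2)
# CNOT: control = X qubit (first), target = probe qubit (second).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

theta = np.pi / 8                       # a small excursion toward X'
psi = np.kron([1, 0], [1, 0])           # |X> (x) |probe ready>

psi = np.kron(ry(theta), I2) @ psi      # |X> -> cos|X> + sin|X'>
psi = CNOT @ psi                        # probe marked only in the X' branch
psi = np.kron(ry(-theta), I2) @ psi     # rotate X back toward |X>

print(np.abs(psi) ** 2)                 # basis order: |X,probe>
# ~[0.925 0.001 0.037 0.037]; the probe ends in |1> with total probability
# sin(theta/2)**2 ~= 0.038 -- a trace of X', inherited entirely from the
# branch in which X' transiently existed, which is just the single-world
# reading described above.
```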
So here is the real issue: The discourse of quantum mechanics is full of “superpositions”. Not just E-V bomb-testing and a superposition which goes from one component to two and back to one—but superpositions in great multitudes. Quantum computers in exponentially large superpositions; atoms and molecules in persistent multi-component superpositions; complicated macro-superpositions which appear to be theoretically possible. A single-world interpretation of quantum theory has to deal with superpositions of all sorts, in a way such that multiplicity of possibility does not equate to multiplicity of actuality. That might be achieved by saying that only one component of the superposition is the reality, or even by saying that none of them is, and that what is real (the hidden variables) is something else entirely.
The Copenhagen interpretation does this, but it does it in a non-explanatory way. The wavefunction is just a predictive device; the particle is always somewhere in particular; and it always happens to be where the predictive device says it will be. You can say that, but really you need to say how that manages to be true. You need a model of microscopic dynamics which explains why the wavefunction works. So we can agree that the Copenhagen interpretation is inadequate for a final theory.
Bohm’s theory has a single world, but then it uses the wavefunction to guide the events there, so the multiverse seems to be there implicitly. You can, however, rewrite Bohm’s equation of motion so it makes no reference to a pilot wave. Instead, you have a nonlocal potential. I’m not aware of significant work in this direction so I don’t know if it is capable of truly banishing the multiverse from its explanation of QM.
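For concreteness, the standard equations being alluded to (textbook Bohmian mechanics, nothing specific to this comment) are the first-order guidance equation and Bohm’s own second-order rewriting, in which the pilot wave is traded for a “quantum potential” $Q$ that depends on the whole configuration:

$$\frac{d\mathbf{x}_k}{dt} = \frac{\hbar}{m_k}\,\operatorname{Im}\frac{\nabla_k \psi}{\psi}\bigg|_{(\mathbf{x}_1,\dots,\mathbf{x}_N)}, \qquad m_k\,\frac{d^2\mathbf{x}_k}{dt^2} = -\nabla_k\big(V + Q\big), \qquad Q = -\sum_j \frac{\hbar^2}{2m_j}\frac{\nabla_j^2|\psi|}{|\psi|}.$$

In the second form $\psi$ no longer appears as a wave guiding anything, but $Q$ still couples every particle to every other nonlocally, which is the sense in which the multiverse may only be hidden rather than banished.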
Anyway, let me return to this issue of interpreting a generic superposition in a single-world way and make some further comments. You should not assume, just because, formally and mathematically, we can write down something like |cat dead>+|cat alive>, that there simply must be some physical reality where both cat-dead information and cat-alive information can be extracted with equal ease. This is very far from having been demonstrated.
First of all, on a purely classical level, I can easily cook up a probability distribution like “50% probability the cat is dead and 50% probability the cat is alive”, but I cannot deduce from it that the multiverse must be real. That the formalism can talk about macro-superpositions doesn’t yet make them real.
So in order to make things difficult for the single-world theorist, we need superpositions where the various branches seem to have an equal claim on reality, e.g. where we can probe at will for information about macroscopically distinct situations existing within the superposition. Schrodinger’s cat doesn’t really work because you only ever see the cat dead or alive. If you could somehow open the lid and see the cat alive, and yet also get a photo from a video camera in the box which showed the cat to be dead—now that would be evidence of a multiverse!
Counterfactual measurement and counterfactual computation certainly sound like this. Thus, in counterfactual computation, you couple your readout system to a quantum computer, and then you do the |X> to |X>+|X’> thing and back again, where X’ is “quantum computer makes a computation”. So the computer is back in the X state, and the computation never was, but it left its counterfactual trace in the readout system. It’s as if you opened the lid on the box and the cat was alive, yet the video camera gave you a photo of the cat dead.
However, the superpositions and entanglements involved in these experiments are so micro, and so far from anything macroscopic and familiar, that to talk about them in these ways is very much a rhetorical choice. A common-sense interpretation of counterfactual measurement would be that you are simply measuring an existing property which would produce the counterfactual behavior under the right circumstances. Thus, I might look at a house of cards and claim that it would fall down in the presence of a strong wind. But I didn’t arrive at this correct conclusion by mysteriously probing a parallel world where a strong wind actually blows; I arrived at the conclusion by observing the very fine balance of forces existing among the cards in this world. I learnt something true by observing something actual, not by paranormally observing something possible.
The argument that counterfactual measurement implies many-worlds is applying this common-sense principle, but it’s also assuming that the actuality which provides the information cannot be anything less than a duplicate world. Because counterfactual measurement has so far only been performed on microscopic quantum systems, we do not get to apply macroscopic intuitions as we could in the situation of a counterfactual photograph of Schrodinger’s cat.
To really make progress here, what we need is a thought-experiment in which a macroscopic superposition is made to yield information about more than one branch, as the counterfactualist rhetoric claims. Unfortunately, your needle-in-the-arm experiment is not there yet, because we haven’t gone into the exact details of how it’s supposed to work. You can’t just say, ‘If we did a quantum experiment where we could produce data about glucose levels in someone’s bloodstream, without the needle having gone into their arm, why, that would prove that the multiverse is real!’ Even just as a hypothetical, that’s not enough. You need to explain how the decoherence shielding works and what the quantum readout system is—the one which remembers the interaction with the branch where the needle did go in. We need a physically complete description of the thought-experiment in order to reason about the interpretive possibilities.
Sampling the bloodstream is too complicated because of all the complexities of human metabolism. But something like reversibly sampling a salt crystal and measuring ion density—that sounds like a more plausible candidate for analysis, something where the experimental setup can be described and where the analysis is tractable. I’ll have to see if there’s a proposed experiment or known thought-experiment which is suitable…
“To really make progress here, what we need is a thought-experiment in which a macroscopic superposition is made to yield information about more than one branch, as the counterfactualist rhetoric claims. Unfortunately, your needle-in-the-arm experiment is not there yet, because we haven’t gone into the exact details of how it’s supposed to work. You can’t just say, ‘If we did a quantum experiment where we could produce data about glucose levels in someone’s bloodstream, without the needle having gone into their arm, why, that would prove that the multiverse is real!’ Even just as a hypothetical, that’s not enough. You need to explain how the decoherence shielding works and what the quantum readout system is”
I think you are mistaken here, Mitchell. But let me first thank you for engaging. Most people, when confronted with different outcomes than they expected from the fully logical implications of their own thinking, run screaming from the room.
Perhaps someone could write on these very pages a detailed and excellent quantum-mechanical description of a hypothetical experiment in which a “counterfactual” blood sugar measurement is made. But if so, would that then make you believe in the reality of the multiverse? It shouldn’t, from a logical point of view, because my (or anyone else’s) ability to do that is completely irrelevant to the argument about the reality of the multiverse...
We are interested in the implications of our understanding of the current laws of physics. When we now talk about which “interpretation” of quantum mechanics is the correct one (and that is what I thought we were talking about), we are talking about interpreting the current laws of physics. (Right?) What do the currently understood laws of physics allow us to do, using whichever interpretation one wants, since each interpretation is supposed to give the same predictions? If all the interpretations say that we can make measurements on counterfactual realities, do all of the interpretations still make logical sense?
I think I have not yet heard an answer to the question, “Is there a current law of physics that prohibits a blood sugar measuring device from measuring counterfactual blood sugars?”
Since I doubt (but could be mistaken) that you are able to point to a current law of physics that says that such a device can’t be created, I will assume that you can’t. That’s OK. I can’t either.
To my knowledge there is no law of physics that says there is an in-principle limit on the amount of complexity in a superposition. If there is, show me which one.
Since there is no limit in the current laws of physics about this (and I assume we are agreeing on this point), those who believe in any interpretation of quantum mechanics (that makes these same predictions) should also agree on this point.
So adherents to any of the legitimate quantum mechanical interpretations (e.g. Copenhagen, Transactional, Bohm, Everettian) should also agree that our current laws of physics do not limit the amount of complexity in a superposition.
And if a law of physics does not prevent something, then it can be done given enough knowledge. This is the most important point. Do you (Mitchell) dispute this or can anyone point out why I am mistaken about it? I would really like to know.
So if enough knowledge allows us to create any amount of complex superposition, then the laws of physics are telling us that any measurement that we can currently perform using standard techniques (for example measurements of blood sugars, lengths of tables, colors of walls, etc.) can also be performed using counterfactual measurement.
But if we can make the same measurements in one reality as another, given enough knowledge, why do we have the right to say that one reality is real and the other is not?
Somehow I never examined these experiments and arguments. But what I’ve learned so far is to reject counterfactualism.
If you have an Everett camera in your Schrodinger cat-box which sometimes takes a picture of a dead cat, even when the cat later walks out of the box alive, then as a single-world theorist I should say the cat was dead when the photo was taken, and later came back to life. That may be a thermodynamic miracle, but that’s why I need to know exactly how your Everett camera is supposed to work. It may turn out that it works so rarely that this is the reasonable explanation. Or it may be that you are controlling the microscopic conditions in the box so tightly – in order to preserve quantum coherence – that you are just directly putting the cat’s atoms back into the living arrangement yourself.
Such an experiment allegedly involves a superposition of histories, one of the form
|alive> → |alive> → |alive>
and the other
|alive> → |dead> → |alive>
And then the camera is supposed to have registered the existence of the |dead> component of the superposition during the intermediate state.
But how did that second history even happen? Either it happened by itself, in which case there was the thermodynamic miracle (dead cat spontaneously became live cat). Or, it was caused to happen, in which case you somehow made it happen! Either way, my counter-challenge would be: what’s the evidence that the cat was also alive at the time it was photographed in a dead state?
I think I see where we are disagreeing.
Consider a quantum computer. If the laws of physics say that only our lack of knowledge limits the amount of complexity in a superposition, and the logic of quantum computation suggests that greater complexity of superposition leads to exponentially increased computational capacity for certain types of computation, then it will be quite possible to have a quantum computer sit on a desktop and make more calculations per second than there are atoms in the universe. My quote above from David Deutsch makes that point. Only the limitations of our current knowledge prevent that.
When we have larger quantum computers, children will be programming universes with all the richness and diversity of our own, and no one will be arguing about the reality of the multiverse. If the capacity for superposition is virtually limitless, the exponential possibilities are virtually limitless. But so will be the capacity to measure “counterfactual” states that are more and more evolved, like dead cats with lower body temperatures. Why will the body temperature be lower? Why will the cat in that universe not (usually) be coming back to life?
As you state, because of the laws of thermodynamics. With greater knowledge on our part, the exponential increase in computational capacity of the quantum computer will parallel the exponential increase in our ability to measure states that are decohering from our own and are further evolved, using what you call the “Everett camera”. I say “decohering from” rather than “decoherent from” because there is never a time when these states are completely thermodynamically separated. And the state vector has unitary evolution. We would not expect it to go backwards any more than you would expect to see your own cat at home go from a dead to an alive state.
I am afraid that whether we use an Everett camera or one supplied to us by evolution (our neuropsychological apparatus) we are always interpreting reality through the lens of our theories. Often these theories are useful from an evolutionary perspective but nonetheless misleading. For example, we are likely to perceive that the world is flat, absent logic and experiment. It is equally easy to miss the existence of the multiverse because of the ruse of positivism. “I didn’t see the needle penetrate the skin in your quantum experiment. It didn’t or (even worse!) can’t happen.” But of course when we do this experiment with standard needles, we never truly see the needle go in, either.
I have enjoyed this discussion.
“If [...] the logic of quantum computation suggests that greater complexity of superposition leads to exponentially increased computational capacity for certain types of computation, then it will be quite possible to have a quantum computer sit on a desktop and make more calculations per second than there are atoms in the universe.”

Certainly the default extrapolation is that quantum computers can efficiently perform some types of computation that would on a classical computer take more cycles than the number of atoms in the universe. But that’s not quite what you asserted.
Suppose I have a classical random access machine that runs a given algorithm in time O(N), where the best equivalent algorithm for a classical 1D Turing machine takes O(N^2). Would you say that I really performed N^2 arithmetic ops, and theorize about where the extra calculation happened? Or would you say that the Turing machine isn’t a good model of the computational complexity class of classical physics?
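A concrete toy version of the point (my own hypothetical step-counting, not a formal machine model): count “steps” for the same task under random access and under sequential, tape-like access.

```python
# Toy step-counting for the same task under two machine models.
# (A hypothetical illustration; not a formal Turing machine.)

def sum_at_indices_ram(data, queries):
    """Random-access model: each lookup costs one step."""
    steps, total = 0, 0
    for q in queries:
        total += data[q]
        steps += 1
    return total, steps

def sum_at_indices_tape(data, queries):
    """Sequential model: the head must walk to each queried cell."""
    steps, total, head = 0, 0, 0
    for q in queries:
        steps += abs(q - head)  # head movement dominates the cost
        head = q
        total += data[q]
        steps += 1
    return total, steps

data = list(range(1000))
queries = [999, 0, 999, 0]
print(sum_at_indices_ram(data, queries))   # (1998, 4)
print(sum_at_indices_tape(data, queries))  # (1998, 4000)
```

Both machines return the same answer; only the step count differs, and nobody theorizes about where the tape machine’s “extra” work physically happened. The question is whether quantum speedups are like this, or genuinely require parallel computation.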
I do subscribe to Everett, so I don’t object to your conclusion. But I don’t think exponential parallelism is a good description of quantum computation, even in the cases where you do get an exponential speedup.
Edit: I said that badly. I think I meant that the parallelism is not inferred from the class of problems you can solve, except insofar as the latter is evidence about the implementation method.
I do think exponential parallelism is a good description of QC, because any adequate causal model of a quantum computation will invoke an exponential number of nodes in the explanation of the computation’s output. Even if we can’t always take full advantage of the exponential number of calculations being performed, because of the readout problem, it is nonetheless only possible to explain quantum readouts in general by postulating that an exponential number of parallel calculations went on behind the scenes.
Here, of course, “causal model” is to be taken in the technical Pearl sense of the term, a directed acyclic graph of nodes each of whose values can be computed from its parent nodes plus a background factor of uncertainty that is uncorrelated to any other source of uncertainty, etc. I specify this to cut off any attempt to say something like “well, but those other worlds don’t exist until you measure them”. Any formal causal model that explains the quantum computation’s output will need an exponential number of nodes, since those nodes have real, causal effects on the final probability distribution over outputs.
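One way to see the force of this counting argument (a standard sketch, not anything unique to Pearl-style models): even the cheapest classical simulation of an n-qubit circuit tracks one amplitude per branch, and the number of such nodes doubles with every qubit.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def uniform_superposition(n):
    """Apply H to each of n qubits, starting from |0...0>."""
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0
    for q in range(n):
        psi = psi.reshape(2 ** q, 2, -1)              # expose qubit q's axis
        psi = np.tensordot(H, psi, axes=([1], [1]))   # apply H to that axis
        psi = psi.transpose(1, 0, 2).reshape(-1)
    return psi

for n in (2, 10, 20):
    amps = uniform_superposition(n)
    print(n, amps.size)   # 4, 1024, 1048576 amplitudes, all causally relevant
```

Whether those 2^n amplitudes deserve to be called “worlds” is the interpretive question, but a causal model of the output distribution cannot make do with fewer of them.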