“By modulating the boundary conditions we are exploring logical possibilities, and that is how we probe counterfactual realities (in the transactional interpretation)”
But note then that these “logical possibilities” must render a complete map of the blood and all its atomic and subatomic components, including its oxygen concentration, because without these components, and without a heart beating properly to oxygenate the blood, the measurement of the blood sugar would be wrong. And without an atmosphere, a universe that allows an atmosphere to have appropriate oxygen content, and lungs to breathe in the oxygen, the blood sugar measurement would also be wrong.
But it is not wrong.
So this “logical possibility” (blood sugar measurement with actual result) must simulate not only the blood, but the heart, the person, the planet on which he resides and the universe which houses the planet, in order for the combined quantum state to appropriately render a universe to calculate the correct results of a blood sugar measurement (or any other wanted measurement) that is made on this merely “possible” universe. Does anyone seriously doubt that multiple different measurements could be made on this so-called merely “possible” universe to make sure that it performs like ours? (Blood sugar measurement, dimensions of room in which experiment was performed, color of wall, etc.)
It is almost humorous to have to ask, “What is the difference between a map that renders every single aspect of a territory, including its subatomic structure, and the territory?”
It is strangely sad (and a tribute to positivism) that we think that, just because we cannot see the needle penetrating the skin, the blood must be merely possible blood, not actual blood. Does our examination of dinosaur fossils really imply the existence of merely possible dinosaurs, just because we can’t see the dinosaurs right now?
So, in order to eliminate the multiverse theory, opponents must believe that blood sugar measurements—on blood—in people—on planets—in a universe—are somehow not real just because you can’t see the needle penetrate the skin. How is that philosophically different from measuring our own blood? Why do we not call our own blood mere possible blood, given that when we measure it we also only see the results of the measurement through the lens of our own implicit neuropsychological theories? All data is interpreted through theory, whether it is data about another universe or our own.
Or one must formulate a new law of physics, as Penrose does. Note that one formulates this new law, not because the old laws are not working, but merely because the multiverse conclusion does not seem right to him. I appreciate his honesty in implicitly agreeing that the multiverse conclusion follows unless a new law of physics is invented.
“So this ‘logical possibility’ (blood sugar measurement with actual result) must simulate not only the blood, but the heart, the person, the planet on which he resides and the universe which houses the planet, in order for the combined quantum state to appropriately render a universe to calculate the correct results of a blood sugar measurement (or any other wanted measurement) that is made on this merely ‘possible’ universe.”
Slow down there. In order to “simulate” the behavior of an entity X using counterfactual measurement, you need X (both actually and counterfactually) to be isolated from the rest of the universe (interactions must be weak enough to not decohere the superposition). To say that we must be able to simulate the rest of the universe because we could instead be measuring Y, Z, etc., is confusing the matter.
The basic claim of the counterfactualists is: We can find out about a possible state of X, call it X’, by inducing a temporary superposition in X (schematically, |X> goes to |X>+|X’> and then back to |X>) while it is coupled to some other quantum system. We find out something about X’ by examining the final state of that other system, but X’ itself never actually existed, just X.
So the core claim is that by having quantum control over an entity, you can find out about how it would behave, without actually making it behave that way. This applies to any entity or combination of entities, though it will be much easier for some than others.
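To make that schematic concrete, here is a minimal numerical sketch of the standard Elitzur-Vaidman interferometer, the usual showcase for this kind of claim. The three-level encoding and the treatment of the “bomb” as a perfect which-path absorber are simplifying assumptions of mine, not anyone’s published code.

```python
import numpy as np

# Minimal Elitzur-Vaidman "bomb test": a Mach-Zehnder interferometer where
# one arm may contain a detector ("bomb") that absorbs the photon.
# Basis states: 0 = photon in upper arm, 1 = photon in lower arm,
#               2 = photon absorbed by the bomb.

bs = np.array([[1,  1, 0],
               [1, -1, 0],
               [0,  0, np.sqrt(2)]]) / np.sqrt(2)  # 50/50 beam splitter on arms 0 and 1

def run(bomb_in_lower_arm):
    psi = np.array([1.0, 0.0, 0.0])   # photon enters through the upper port
    psi = bs @ psi                    # first beam splitter: superposition of the two arms
    if bomb_in_lower_arm:
        # A live bomb acts as a which-path detector: all amplitude in the
        # lower arm is transferred to the "absorbed" outcome.
        psi = np.array([psi[0], 0.0, psi[1]])
    psi = bs @ psi                    # second beam splitter: recombine the arms
    return np.abs(psi) ** 2           # [P(bright detector), P(dark detector), P(bomb explodes)]

print(run(False))  # [1.0, 0.0, 0.0]  : no bomb, the dark detector never fires
print(run(True))   # [0.25, 0.25, 0.5]: with a live bomb, the dark detector fires in 25% of runs
```

The dark-detector clicks are the “counterfactual trace”: in those runs you learn the bomb is live even though, on the counterfactualist telling, the photon never went down the bomb’s arm.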
Now first I want to point out that being a single-world theorist does not immediately make you a counterfactualist about these measurements. All a single-world theorist has to do is to explain quantum mechanics without talking about a multiverse. Suppose someone were to say of the above process that what actually existed was X, then X’, and then X again, and that X’ while it existed interacted a little with the auxiliary quantum system. Suddenly the counterfactualist magic is gone, and we know about X’ simply because situation X’ really did exist for a while, and it left a trace of its existence in something else.
So here is the real issue: The discourse of quantum mechanics is full of “superpositions”. Not just Elitzur-Vaidman bomb-testing and a superposition which goes from one component to two and back to one—but superpositions in great multitudes. Quantum computers in exponentially large superpositions; atoms and molecules in persistent multi-component superpositions; complicated macro-superpositions which appear to be theoretically possible. A single-world interpretation of quantum theory has to deal with superpositions of all sorts, in a way such that multiplicity of possibility does not equate to multiplicity of actuality. That might be achieved by saying that only one component of the superposition is the reality, or even by saying that none of them is, and that what is real (the hidden variables) is something else entirely.
The Copenhagen interpretation does this, but it does it in a non-explanatory way. The wavefunction is just a predictive device; the particle is always somewhere in particular; and it always happens to be where the predictive device says it will be. You can say that, but really you need to say how that manages to be true. You need a model of microscopic dynamics which explains why the wavefunction works. So we can agree that the Copenhagen interpretation is inadequate for a final theory.
Bohm’s theory has a single world, but then it uses the wavefunction to guide the events there, so the multiverse seems to be there implicitly. You can, however, rewrite Bohm’s equation of motion so it makes no reference to a pilot wave. Instead, you have a nonlocal potential. I’m not aware of significant work in this direction so I don’t know if it is capable of truly banishing the multiverse from its explanation of QM.
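For the record, one standard way to recast the dynamics goes roughly as follows; this is a textbook-level sketch from memory, not a summary of any particular research program. Writing the wavefunction in polar form turns the guidance equation into Newtonian dynamics under an extra “quantum potential” that depends on the entire configuration, and that is where the nonlocality lives.

```latex
% Polar form of the N-particle wavefunction on configuration space
\psi(q,t) = R(q,t)\,e^{iS(q,t)/\hbar}, \qquad q = (x_1,\dots,x_N)

% de Broglie-Bohm guidance equation for particle k
\frac{dx_k}{dt} = \frac{\nabla_k S(q,t)}{m_k}

% Differentiating and using the Schrodinger equation gives a Newton-like law
m_k\,\frac{d^2 x_k}{dt^2} = -\nabla_k\bigl(V(q) + Q(q,t)\bigr),
\qquad
Q(q,t) = -\sum_j \frac{\hbar^2}{2m_j}\,\frac{\nabla_j^2 R(q,t)}{R(q,t)}
```

Since Q is evaluated at the actual configuration of all the particles, the force on particle k depends on where every other particle is, which is the nonlocal potential I mentioned.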
Anyway, let me return to this issue of interpreting a generic superposition in a single-world way and make some further comments. You should not assume that just because, formally and mathematically, we can write about something like |cat dead>+|cat alive>, there simply must be some physical reality where both cat-dead information and cat-alive information can be extracted with equal ease. This is very far from having been demonstrated.
First of all, on a purely classical level, I can easily cook up a probability distribution like “50% probability the cat is dead and 50% probability the cat is alive”, but I cannot deduce from that, that the multiverse must be real. That the formalism can talk about macro-superpositions doesn’t yet make them real.
So in order to make things difficult for the single-world theorist, we need superpositions where the various branches seem to have an equal claim on reality, e.g. where we can probe at will for information about macroscopically distinct situations existing within the superposition. Schrodinger’s cat doesn’t really work because you only ever see the cat dead or alive. If you could somehow open the lid and see the cat alive, and yet also get a photo from a video camera in the box which showed the cat to be dead—now that would be evidence of a multiverse!
Counterfactual measurement and counterfactual computation certainly sound like this. Thus, in counterfactual computation, you couple your readout system to a quantum computer, and then you do the |X> to |X>+|X’> thing and back again, where X’ is “quantum computer makes a computation”. So the computer is back in the X state, and the computation never was, but it left its counterfactual trace in the readout system. It’s as if you opened the lid on the box and the cat was alive, yet the video camera gave you a photo of the cat dead.
However, the superpositions and entanglements involved in these experiments are so micro, and so far from anything macroscopic and familiar, that to talk about them in these ways is very much a rhetorical choice. A common-sense interpretation of counterfactual measurement would be that you are simply measuring an existing property which would produce the counterfactual behavior under the right circumstances. Thus, I might look at a house of cards and claim that it would fall down in the presence of a strong wind. But I didn’t arrive at this correct conclusion by mysteriously probing a parallel world where a strong wind actually blows; I arrived at the conclusion by observing the very fine balance of forces existing among the cards in this world. I learnt something true by observing something actual, not by paranormally observing something possible.
The argument that counterfactual measurement implies many-worlds is applying this common-sense principle, but it’s also assuming that the actuality which provides the information cannot be anything less than a duplicate world. Because counterfactual measurement has so far only been performed on microscopic quantum systems, we do not get to apply macroscopic intuitions as we could in the situation of a counterfactual photograph of Schrodinger’s cat.
To really make progress here, what we need is a thought-experiment in which a macroscopic superposition is made to yield information about more than one branch, as the counterfactualist rhetoric claims. Unfortunately, your needle-in-the-arm experiment is not there yet, because we haven’t gone into the exact details of how it’s supposed to work. You can’t just say, ‘If we did a quantum experiment where we could produce data about glucose levels in someone’s bloodstream, without the needle having gone into their arm, why, that would prove that the multiverse is real!’ Even just as a hypothetical, that’s not enough. You need to explain how the decoherence shielding works and what the quantum readout system is—the one which remembers the interaction with the branch where the needle did go in. We need a physically complete description of the thought-experiment in order to reason about the interpretive possibilities.
Sampling the bloodstream is too complicated because of all the complexities of human metabolism. But something like reversibly sampling a salt crystal and measuring ion density—that sounds like a more plausible candidate for analysis, something where the experimental setup can be described and where the analysis is tractable. I’ll have to see if there’s a proposed experiment or known thought-experiment which is suitable…
“To really make progress here, what we need is a thought-experiment in which a macroscopic superposition is made to yield information about more than one branch, as the counterfactualist rhetoric claims. Unfortunately, your needle-in-the-arm experiment is not there yet, because we haven’t gone into the exact details of how it’s supposed to work. You can’t just say, ‘If we did a quantum experiment where we could produce data about glucose levels in someone’s bloodstream, without the needle having gone into their arm, why, that would prove that the multiverse is real!’ Even just as a hypothetical, that’s not enough. You need to explain how the decoherence shielding works and what the quantum readout system is”
I think you are mistaken here, Mitchell. But let me first thank you for engaging. Most people, when confronted with different outcomes than they expected from the fully logical implications of their own thinking, run screaming from the room.
Perhaps someone could write on these very pages a detailed and excellent quantum-mechanical description of a hypothetical experiment in which a “counterfactual” blood sugar measurement is made. But if so, would that then make you believe in the reality of the multiverse? It shouldn’t, from a logical point of view. Because my (or anyone else’s) ability to do that is completely irrelevant to the argument about the reality of the multiverse...
We are interested in the implications of our understanding of the current laws of physics. When we now talk about which “interpretation” of quantum mechanics is the correct one (and that is what I thought we were talking about), we are talking about interpreting the current laws of physics. (Right?) What do the currently understood laws of physics allow us to do, using whichever interpretation one wants, since each interpretation is supposed to give the same predictions? If all the interpretations say that we can make measurements on counterfactual realities, then do all of the interpretations still make logical sense?
I think I have not yet heard an answer to the question, “Is there a current law of physics that prohibits a blood sugar measuring device from measuring counterfactual blood sugars?”
Since I doubt (but could be mistaken) that you are able to point to a current law of physics that says that such a device can’t be created, I will assume that you can’t. That’s OK. I can’t either.
To my knowledge there is no law of physics that says there is an in principle limit on the amount of complexity in a superposition. If there is, show me which one.
Since there is no limit in the current laws of physics about this (and I assume we are agreeing on this point), those who believe in any interpretation of quantum mechanics (that makes these same predictions) should also agree on this point.
So adherents to any of the legitimate quantum mechanical interpretations (e.g. Copenhagen, Transactional, Bohm, Everettian) should also agree that our current laws of physics do not limit the amount of complexity in a superposition.
And if a law of physics does not prevent something, then it can be done given enough knowledge. This is the most important point. Do you (Mitchell) dispute this or can anyone point out why I am mistaken about it? I would really like to know.
So if enough knowledge allows us to create any amount of complex superposition, then the laws of physics are telling us that any measurement that we can currently perform using standard techniques (for example measurements of blood sugars, lengths of tables, colors of walls, etc.) can also be performed using counterfactual measurement.
But if we can make the same measurements in one reality as another, given enough knowledge, why do we have the right to say that one reality is real and the other is not?
Somehow I never examined these experiments and arguments. But what I’ve learned so far is to reject counterfactualism.
If you have an Everett camera in your Schrodinger cat-box which sometimes takes a picture of a dead cat, even when the cat later walks out of the box alive, then as a single-world theorist I should say the cat was dead when the photo was taken, and later came back to life. That may be a thermodynamic miracle, but that’s why I need to know exactly how your Everett camera is supposed to work. It may turn out that it works so rarely that this is the reasonable explanation. Or it may be that you are controlling the microscopic conditions in the box so tightly – in order to preserve quantum coherence – that you are just directly putting the cat’s atoms back into the living arrangement yourself.
Such an experiment allegedly involves a superposition of histories, one of the form
|alive> → |alive> → |alive>
and the other
|alive> → |dead> → |alive>
And then the camera is supposed to have registered the existence of the |dead> component of the superposition during the intermediate state.
But how did that second history even happen? Either it happened by itself, in which case there was the thermodynamic miracle (dead cat spontaneously became live cat). Or, it was caused to happen, in which case you somehow made it happen! Either way, my counter-challenge would be: what’s the evidence that the cat was also alive at the time it was photographed in a dead state?
I think I see where we are disagreeing.

Consider a quantum computer. If the laws of physics say that only our lack of knowledge limits the amount of complexity in a superposition, and the logic of quantum computation suggests that greater complexity of superposition leads to exponentially increased computational capacity for certain types of computation, then it will be quite possible to have a quantum computer sit on a desktop and make more calculations per second than there are atoms in the universe. My quote above from David Deutsch makes that point. Only the limitations of our current knowledge prevent that.
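As a back-of-the-envelope check on that claim (the figure of roughly 10^80 atoms in the observable universe is the usual order-of-magnitude estimate, nothing more precise): the state vector of n qubits has 2^n amplitudes, and 2^n passes 10^80 at a surprisingly modest n.

```python
import math

ATOMS_IN_OBSERVABLE_UNIVERSE = 1e80   # common order-of-magnitude estimate, not an exact figure

# Smallest number of qubits whose state vector has more amplitudes
# than that estimate of the number of atoms.
n = math.ceil(math.log2(ATOMS_IN_OBSERVABLE_UNIVERSE))
print(n)         # 266
print(2.0 ** n)  # ~1.2e80 amplitudes
```

Of course, 266 well-controlled qubits is far beyond current hardware; the point is only that the laws themselves set no such limit.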
When we have larger quantum computers, children will be programming universes with all the richness and diversity of our own, and no one will be arguing about the reality of the multiverse. If the capacity for superposition is virtually limitless, the exponential possibilities are virtually limitless. But so will be the capacity to measure “counterfactual” states that are more and more evolved, like dead cats with lower body temperatures. Why will the body temperature be lower? Why will the cat in that universe not (usually) be coming back to life?
As you state, because of the laws of thermodynamics. With greater knowledge on our part, the exponential increase in computational capacity of the quantum computer will parallel the exponential increase in our ability to measure states that are decohering from our own and are further evolved, using what you call the “Everett camera”. I say “decohering from” rather than “decoherent from” because there is never a time when these states are completely thermodynamically separated. And the state vector has unitary evolution. We would not expect it to go backwards any more than you would expect to see your own cat at home go from a dead to an alive state.
I am afraid that whether we use an Everett camera or one supplied to us by evolution (our neuropsychological apparatus), we are always interpreting reality through the lens of our theories. Often these theories are useful from an evolutionary perspective but nonetheless misleading. For example, we are likely to perceive that the world is flat, absent logic and experiment. It is equally easy to miss the existence of the multiverse because of the ruse of positivism. “I didn’t see the needle penetrate the skin in your quantum experiment. It didn’t or (even worse!) can’t happen.” But of course when we do this experiment with standard needles, we never truly see the needle go in, either.

I have enjoyed this discussion.
“If [...] the logic of quantum computation suggests that greater complexity of superposition leads to exponentially increased computational capacity for certain types of computation, then it will be quite possible to have a quantum computer sit on a desktop and make more calculations per second than there are atoms in the universe.”
Certainly the default extrapolation is that quantum computers can efficiently perform some types of computation that would on a classical computer take more cycles than the number of atoms in the universe. But that’s not quite what you asserted.
Suppose I have a classical random-access machine that runs a given algorithm in time O(N), where the best equivalent algorithm for a classical 1D Turing machine takes O(N^2). Would you say that I really performed N^2 arithmetic ops, and theorize about where the extra calculation happened? Or would you say that the Turing machine isn’t a good model of the computational complexity class of classical physics?
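To make the analogy concrete, here is a toy cost model I am making up for this comment (not a formal complexity argument): charge the random-access machine one step per lookup, and charge the single-tape machine the distance its head must travel; the same sequence of scattered lookups then costs O(N) in one model and on the order of N^2 in the other.

```python
import random

def ram_cost(queries):
    # Random-access model: every lookup costs one step, wherever it lands.
    return len(queries)

def tape_cost(queries, head=0):
    # Single-tape model: a lookup costs the distance the head travels to reach it.
    cost = 0
    for q in queries:
        cost += abs(q - head) + 1
        head = q
    return cost

N = 10_000
queries = [random.randrange(N) for _ in range(N)]  # N scattered lookups into N cells

print(ram_cost(queries))   # N steps
print(tape_cost(queries))  # roughly N^2 / 3 steps for uniformly random lookups
```

Nobody looks at the gap and theorizes about where the “missing” N^2 operations were physically performed; they conclude that the two machines are simply charged differently for the same work.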
I do subscribe to Everett, so I don’t object to your conclusion. But I don’t think exponential parallelism is a good description of quantum computation, even in the cases where you do get an exponential speedup.
Edit: I said that badly. I think I meant that the parallelism is not inferred from the class of problems you can solve, except insofar as the latter is evidence about the implementation method.
I do think exponential parallelism is a good description of QC, because any adequate causal model of a quantum computation will invoke an exponential number of nodes in the explanation of the computation’s output. Even if we can’t always take full advantage of the exponential number of calculations being performed, because of the readout problem, it is nonetheless only possible to explain quantum readouts in general by postulating that an exponential number of parallel calculations went on behind the scenes.
Here, of course, “causal model” is to be taken in the technical Pearl sense of the term, a directed acyclic graph of nodes each of whose values can be computed from its parent nodes plus a background factor of uncertainty that is uncorrelated to any other source of uncertainty, etc. I specify this to cut off any attempt to say something like “well, but those other worlds don’t exist until you measure them”. Any formal causal model that explains the quantum computation’s output will need an exponential number of nodes, since those nodes have real, causal effects on the final probability distribution over outputs.
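Here is a small illustration of the kind of dependence I mean, using a generic phase-oracle circuit I’ve chosen for convenience rather than any particular published algorithm: after a Hadamard on every qubit, a phase oracle, and Hadamards again, the amplitude for reading all zeros is the average of (-1)^f(x) over all 2^n inputs, so every one of those values feeds into a single readout probability.

```python
import numpy as np

# Sketch: H^n . PhaseOracle_f . H^n acting on |0...0>.
# The final amplitude on |0...0> is (1/2^n) * sum_x (-1)^f(x),
# so all 2^n values of f bear on one readout probability.

n = 10
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def hadamard_all(psi, n):
    # Apply a Hadamard to each of the n qubits of the state vector psi.
    psi = psi.reshape([2] * n)
    for q in range(n):
        psi = np.tensordot(H, psi, axes=([1], [q]))
        psi = np.moveaxis(psi, 0, q)
    return psi.reshape(-1)

f = np.random.randint(0, 2, size=2 ** n)  # an arbitrary Boolean function f(x), a stand-in oracle

psi = np.zeros(2 ** n)
psi[0] = 1.0                        # start in |0...0>
psi = hadamard_all(psi, n)          # uniform superposition over 2^n branches
psi = ((-1.0) ** f) * psi           # phase oracle acts on every branch
psi = hadamard_all(psi, n)          # interfere the branches back together

print(abs(psi[0]) ** 2)                # readout probability at |0...0>
print(abs(np.mean((-1.0) ** f)) ** 2)  # the same number, built from all 2^n values f(x)
```

Flip any single value f(x) and the output amplitudes change; that is the sense in which an exponential number of nodes have real causal effects on the final distribution, whether or not you ever manage to read most of them out.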