“To really make progress here, what we need is a thought-experiment in which a macroscopic superposition is made to yield information about more than one branch, as the counterfactualist rhetoric claims. Unfortunately, your needle-in-the-arm experiment is not there yet, because we haven’t gone into the exact details of how it’s supposed to work. You can’t just say, ‘If we did a quantum experiment where we could produce data about glucose levels in someone’s bloodstream, without the needle having gone into their arm, why, that would prove that the multiverse is real!’ Even just as a hypothetical, that’s not enough. You need to explain how the decoherence shielding works and what the quantum readout system is.”
I think you are mistaken here, Mitchell. But let me first thank you for engaging. Most people, when the fully logical implications of their own thinking lead to outcomes they did not expect, run screaming from the room.
Perhaps someone could write on these very pages a detailed and excellent quantum-mechanical description of a hypothetical experiment in which a “counterfactual” blood sugar measurement is made. But if so, would that then make you believe in the reality of the multiverse? It shouldn’t, from a logical point of view, because my (or anyone else’s) ability to do that is completely irrelevant to the argument about the reality of the multiverse...
We are interested in the implications of our understanding of the current laws of physics. When we talk about which “interpretation” of quantum mechanics is the correct one (and that is what I thought we were talking about), we are talking about interpreting the current laws of physics. (Right?) What do the currently understood laws of physics allow us to do, using whichever interpretation one wants, since each interpretation is supposed to give the same predictions? If all the interpretations say that we can make measurements on counterfactual realities, then do all of the interpretations still make logical sense?
I think I have not yet heard an answer to the question, “Is there a current law of physics that prohibits a blood sugar measuring device from measuring counterfactual blood sugars?”
Since I doubt (but could be mistaken) that you are able to point to a current law of physics that says that such a device can’t be created, I will assume that you can’t. That’s OK. I can’t either.
To my knowledge there is no law of physics that says there is an in-principle limit on the amount of complexity in a superposition. If there is, show me which one.
Since there is no limit in the current laws of physics about this (and I assume we are agreeing on this point), those who believe in any interpretation of quantum mechanics (that makes these same predictions) should also agree on this point.
So adherents to any of the legitimate quantum mechanical interpretations (e.g., Copenhagen, Transactional, Bohmian, Everettian) should also agree that our current laws of physics do not limit the amount of complexity in a superposition.
And if a law of physics does not prevent something, then it can be done given enough knowledge. This is the most important point. Do you (Mitchell) dispute this or can anyone point out why I am mistaken about it? I would really like to know.
So if enough knowledge allows us to create any amount of complex superposition, then the laws of physics are telling us that any measurement that we can currently perform using standard techniques (for example measurements of blood sugars, lengths of tables, colors of walls, etc.) can also be performed using counterfactual measurement.
But if we can make the same measurements in one reality as another, given enough knowledge, why do we have the right to say that one reality is real and the other is not?
Somehow I never examined these experiments and arguments. But what I’ve learned so far is to reject counterfactualism.
If you have an Everett camera in your Schrödinger cat-box which sometimes takes a picture of a dead cat, even when the cat later walks out of the box alive, then as a single-world theorist I should say the cat was dead when the photo was taken, and later came back to life. That may be a thermodynamic miracle, but that’s why I need to know exactly how your Everett camera is supposed to work. It may turn out that it works so rarely that this is the reasonable explanation. Or it may be that you are controlling the microscopic conditions in the box so tightly – in order to preserve quantum coherence – that you are just directly putting the cat’s atoms back into the living arrangement yourself.
Such an experiment allegedly involves a superposition of histories, one of the form
|alive> → |alive> → |alive>
and the other
|alive> → |dead> → |alive>
And then the camera is supposed to have registered the existence of the |dead> component of the superposition during the intermediate state.
But how did that second history even happen? Either it happened by itself, in which case there was the thermodynamic miracle (dead cat spontaneously became live cat). Or, it was caused to happen, in which case you somehow made it happen! Either way, my counter-challenge would be: what’s the evidence that the cat was also alive at the time it was photographed in a dead state?
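To make the coherence point concrete, here is a minimal toy model of my own devising (mapping |alive> onto one basis state of a single qubit and |dead> onto the other; it is an illustrative sketch, not a design for the Everett camera):

```python
import numpy as np

# Toy encoding: |alive> = [1, 0], |dead> = [0, 1].
alive = np.array([1.0, 0.0])

# A hypothetical unitary rotating |alive> into an equal superposition
# of |alive> and |dead>; its inverse undoes the rotation exactly.
theta = np.pi / 4
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# History with coherence preserved: U then U^-1 returns |alive> exactly.
mid = U @ alive                       # intermediate superposition
final = U.T @ mid                     # U is real orthogonal, so U^-1 = U^T
p_alive_final = abs(final[0]) ** 2    # 1.0: the cat certainly walks out alive

# But if a "camera" measures (projects onto) the dead branch midway,
# the interference that restores |alive> is destroyed.
p_dead_mid = abs(mid[1]) ** 2                        # 0.5: photo shows a dead cat
collapsed = np.array([0.0, 1.0])                     # state after that photo
p_alive_after_photo = abs((U.T @ collapsed)[0]) ** 2  # only 0.5, not 1.0
```

The toy shows why the details matter: a mid-history record that actually registers the |dead> component changes the final statistics, so a camera that leaves the “cat walks out alive” outcome certain cannot simply be an ordinary measurement of the intermediate state.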
I think I see where we are disagreeing.

Consider a quantum computer. If the laws of physics say that only our lack of knowledge limits the amount of complexity in a superposition, and the logic of quantum computation suggests that greater complexity of superposition leads to exponentially increased computational capacity for certain types of computation, then it will be quite possible to have a quantum computer sit on a desktop and make more calculations per second than there are atoms in the universe. My quote above from David Deutsch makes that point. Only the limitations of our current knowledge prevent that.
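As a rough sketch of the scale involved (the 10^80 figure is the usual order-of-magnitude estimate for atoms in the observable universe, supplied here by me rather than taken from the discussion):

```python
# An n-qubit register's state vector has 2**n complex amplitudes.
# Common order-of-magnitude estimate for the observable universe:
ATOMS_IN_UNIVERSE = 10 ** 80

def qubits_needed(count: int) -> int:
    """Smallest n such that 2**n >= count (exact integer arithmetic)."""
    return (count - 1).bit_length()

n = qubits_needed(ATOMS_IN_UNIVERSE)
# A register of a few hundred qubits already has more amplitudes than
# there are atoms in the observable universe.
print(n)                          # 266
print(2 ** n > ATOMS_IN_UNIVERSE)  # True
```

So a desktop-sized register of roughly 266 qubits suffices for the state vector's amplitude count to exceed the atom count, which is the sense in which the capacity scales beyond any classical inventory of parts.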
When we have larger quantum computers, children will be programming universes with all the richness and diversity of our own, and no one will be arguing about the reality of the multiverse. If the capacity for superposition is virtually limitless, the exponential possibilities are virtually limitless. But so will be the capacity to measure “counterfactual” states that are more and more evolved, like dead cats with lower body temperatures. Why will the body temperature be lower? Why will the cat in that universe not (usually) be coming back to life?
As you state, because of the laws of thermodynamics. With greater knowledge on our part, the exponential increase in computational capacity of the quantum computer will parallel the exponential increase in our ability to measure states that are decohering from our own and are further evolved, using what you call the “Everett camera”. I say “decohering from” rather than “decoherent from” because there is never a time when these states are completely thermodynamically separated. And the state vector has unitary evolution. We would not expect it to go backwards any more than you would expect to see your own cat at home go from a dead to an alive state.
I am afraid that whether we use an Everett camera or one supplied to us by evolution (our neuropsychological apparatus), we are always interpreting reality through the lens of our theories. Often these theories are useful from an evolutionary perspective but nonetheless misleading. For example, we are likely to perceive that the world is flat, absent logic and experiment. It is equally easy to miss the existence of the multiverse because of the ruse of positivism. “I didn’t see the needle penetrate the skin in your quantum experiment. It didn’t or (even worse!) can’t happen.” But of course when we do this experiment with standard needles, we never truly see the needle go in, either.

I have enjoyed this discussion.
“If [...] the logic of quantum computation suggests that greater complexity of superposition leads to exponentially increased computational capacity for certain types of computation, then it will be quite possible to have a quantum computer sit on a desktop and make more calculations per second than there are atoms in the universe.”
Certainly the default extrapolation is that quantum computers can efficiently perform some types of computation that would on a classical computer take more cycles than the number of atoms in the universe. But that’s not quite what you asserted.
Suppose I have a classical random-access machine that runs a given algorithm in time O(N), where the best equivalent algorithm for a classical 1D Turing machine takes O(N^2). Would you say that I really performed N^2 arithmetic ops, and theorize about where the extra calculation happened? Or would you say that the Turing machine isn’t a good model of the computational complexity class of classical physics?
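A toy step-count makes the contrast vivid. The task (answering N positional lookups into an N-element array) and both cost functions are my own deliberately crude stand-ins for the two machine models, not a formal Turing-machine simulation:

```python
# Contrasting two cost models on the same abstract task:
# answering n positional lookups into an n-element array.

def ram_lookup_steps(n: int) -> int:
    """Random-access model: each lookup is one direct memory access."""
    steps = 0
    for _ in range(n):        # n queries
        steps += 1            # direct indexing costs O(1)
    return steps              # total O(n)

def tape_lookup_steps(n: int) -> int:
    """1D single-tape model: the head must walk out to each cell."""
    steps = 0
    for i in range(n):        # n queries, one per position i
        steps += i + 1        # walking the head to cell i costs O(i)
    return steps              # total O(n^2)

print(ram_lookup_steps(1000))   # 1000
print(tape_lookup_steps(1000))  # 500500
```

Same task, same answers, radically different operation counts; the gap is an artifact of the cost model, not of extra arithmetic happening anywhere.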
I do subscribe to Everett, so I don’t object to your conclusion. But I don’t think exponential parallelism is a good description of quantum computation, even in the cases where you do get an exponential speedup.
Edit: I said that badly. I think I meant that the parallelism is not inferred from the class of problems you can solve, except insofar as the latter is evidence about the implementation method.
I do think exponential parallelism is a good description of QC, because any adequate causal model of a quantum computation will invoke an exponential number of nodes in the explanation of the computation’s output. Even if we can’t always take full advantage of the exponential number of calculations being performed, because of the readout problem, it is nonetheless only possible to explain quantum readouts in general by postulating that an exponential number of parallel calculations went on behind the scenes.
Here, of course, “causal model” is to be taken in the technical Pearl sense of the term, a directed acyclic graph of nodes each of whose values can be computed from its parent nodes plus a background factor of uncertainty that is uncorrelated to any other source of uncertainty, etc. I specify this to cut off any attempt to say something like “well, but those other worlds don’t exist until you measure them”. Any formal causal model that explains the quantum computation’s output will need an exponential number of nodes, since those nodes have real, causal effects on the final probability distribution over outputs.
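One way to make the node count concrete, under my assumption that a brute-force classical state-vector simulation can stand in for the causal model: count how many amplitude updates such a simulation must perform to explain even a single layer of gates.

```python
def hadamard_layer_updates(n: int) -> int:
    """Brute-force state-vector simulation of one Hadamard gate on each
    of n qubits, counting the amplitude updates ("nodes") the classical
    explanation of the output has to track."""
    dim = 1 << n
    state = [0j] * dim
    state[0] = 1 + 0j            # start in |00...0>
    s = 2 ** -0.5                # 1/sqrt(2)
    updates = 0
    for q in range(n):
        new = [0j] * dim
        for i in range(dim):
            j = i ^ (1 << q)     # partner index differing in bit q
            if i < j:
                # Standard Hadamard action on the (i, j) amplitude pair.
                new[i] = s * (state[i] + state[j])
                new[j] = s * (state[i] - state[j])
                updates += 2     # two amplitudes touched per pair
        state = new
    return updates

# Even this single layer needs n * 2**n amplitude updates:
print(hadamard_layer_updates(10))  # 10240
```

The count grows as n · 2^n, so any causal graph fine-grained enough to reproduce the output distribution inherits that exponential node count, which is the sense of “exponential parallelism” at issue.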