In fact there are many continuous outcomes in quantum mechanics: the observed velocity or momentum of an electron, or of any other mass, for example. The number of problems where the results break down into discrete states is small compared to the ones with continua. Even in the Schrödinger’s cat example, there are not just two outcome states, alive or dead. Rather there are a myriad of outcome states in which the cat is alive, and another myriad in which the cat is dead.
I personally think the many worlds hypothesis is ludicrous, failing Occam’s Razor by such an astonishing margin that it might as well just grow a long beard. Admittedly, I have not read any respected physicist’s argument for it (or at least I didn’t respect the arguments I have already read). I think the many worlds hypothesis is just a very dopey hack for people who have decided ahead of time that the universe “just HAS to be” deterministic. I say if you observe indeterminism in the universe, then it is your theory that is broken, not the universe.
You don’t agree that Decoherence is Simple? (Not that I’m qualified to have any opinion on the matter.)
I’m glad you gave me that link, thanks. I had seen that article a while ago and it was good to refresh.
The problem with the Many Worlds Hypothesis is that it solves no problem, in my opinion.
In simplest terms, the problem with the “standard” interpretation is that you have probabilistic outcomes from some experiments. Some people hate that.
The “solution” offered by MWI is that at every instance where a wave function would have collapsed to some probabilistically determined value, we have the universe split into as many choices as there are possible wave function collapses, even when that number is infinite because the wave function collapses onto a continuum of position or velocity values. And this happens over and over and over, so you have an efflorescence of zillions and 3^^^^^^3 and googolplexes of universes.
What problem did this solve? Oh yeah, the arbitrariness of the probabilistic result predicted in quantum mechanics.
What problem did it not solve? That I wound up in this particular universe; there is no physics to explain that. “Oh, but you wound up in ALL the universes!” I got cloned a zillionplex times along with the universe. So did everybody else. In fact you got cloned a zillionplex times when a wave function collapsed because I observed something, and then your zillionplex clones got cloned another zillionplex times when you observed something, and then a zillionplex more splittings of each of you for each wavefunction collapse for each of the billions of humans on the planet. Do ants have enough “observer status” to collapse wave functions? Probably; if not ants, then certainly frogs.
So we wind up with a near meaningless explosion of universes that is constantly going on, and indeed is in important senses accelerating constantly.
And of course every one of these overwhelming zillionplexes of zillionplexes of universes is unobservable to us, and will never have the slightest effect on us, once they split off from us!
It’s almost as though they don’t exist!
Except things like quantum computers. It’s almost like those worlds do exist and that we can even use their transistors to parallel process stuff.
Well that is definitely a fun thing to say. It doesn’t seem to be consistent with what is currently thought about quantum computers and Many Worlds, though.
Your link to the Wikipedia article on the MWI does not clarify your objection to the statement made above.
The Wikipedia article states:
However, in 1985 David Deutsch published three related thought experiments which could test the theory vs the Copenhagen interpretation.[69] The experiments require macroscopic quantum state preparation and quantum erasure by a hypothetical quantum computer which is currently outside experimental possibility.
If a quantum computer could correctly be characterized as a computer which utilized the transistors in other branches of the multiverse to speed up calculations in this one, then it would merely require the operation of any quantum computer at all to provide strong evidence for the multiverse. However, the article states that a test for MWI requires a particular special operation of a particular special quantum computer, that the multiverse is not a conclusion we reach merely by seeing a quantum computer work.
Sorry I didn’t make that connection clearer before.
I’m pretty confident that that paper is in error. Or rather, it assumes that the Copenhagen Interpretation is implemented so that it deviates from pure Quantum Mechanics in a particular, testable, way (or category of ways) - and that renders his version of CI distinguishable from MWI, and less useful for quantum computing. When I get academic library access again, I’ll take a closer look at it.
Upon returning and rereading… no. Branches in MWI aren’t said to have ‘split off’ until they are mutually decoherent. That renders them unsuitable for quantum computing.
The chief virtue of decoherence and MWI (as I understand it) is that it avoids ‘the measurement problem’ or the necessity of a natural observer. If you go back far enough in the history of the universe, there were no natural observers.
Well the beauty of the actual system is that you don’t need a “natural observer” until you have one. You calculate the time-evolution of the system with non-collapsing wave functions, then you collapse the wave function only when an observer finally shows up to make the observation.
It doesn’t matter if the wave functions were waiting billions of years to finally be collapsed, you are not missing anything by not having collapses before you have natural observers.
meaning the standard QM interpretation?
But consider the first natural observer, composed of matter. At what point do the wave functions associated with that matter collapse? Before or after its first observation?
With decoherence & MWI, this question presents no problem.
Irrespective of my comment below, where I get more empathetic with the motivation for MWI, I do want to point out some of the reasons why I think MWI may be a “bridge too far” to solve any problems.
The universe as we know it has proven to be gigantically “conservative,” in the sense of having a bunch of conservation laws that it simply never violates, conservation of mass-energy being among the deepest and most powerful. In this universe, at this epoch, stuff is neither created nor destroyed: it is converted from one kind of stuff into another with strict conservation. Even particle pairs that arise from random vacuum fluctuations soon “realize” if they are violating conservation of energy, and disappear before you can even say, within the uncertainty principle, that they were ever there.
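To put that last point in its usual back-of-the-envelope form (the standard energy-time uncertainty heuristic, not a derivation):

$$ \Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2} \quad\Longrightarrow\quad \Delta t \;\lesssim\; \frac{\hbar}{2\,\Delta E}, $$

so a fluctuation that “borrows” an energy $\Delta E$ can only persist for a time of order $\hbar/\Delta E$ before the conservation books have to balance again.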
So now we come along, have a subtle issue with wavefunction collapse and what really causes it and what it all means, and the solution is: the universe may be strictly conservative, but the multiverse is growing in total mass and energy about as fast as any growth function that you can conceive, and THAT is what makes the direction of time so strong?
Yes, of course this COULD wind up being right and being the simplest. I await proposed experimental verifications; without them I can NEVER pick a non-conservative multiverse.
But thanks for making it clearer what some of the things that are gained are.
but the multiverse is growing in total mass and energy about as fast as any growth function that you can conceive, and THAT is what makes the direction of time so strong?
That’s not how MWI works. These worlds are not being created. The wavefunction of the universe is being split up between them.
Are there more of these worlds now than there were 15 billion years ago?
If so, you can call it anything you want, but I vote that “created” is a pretty good term for something that exists now that didn’t exist before.
I failed to mention one major additional point. Decoherence and MWI also account for the observed fine-tuning of the universe to support life, including key details of the inflation hypothesis. The standard interpretation doesn’t.
As to conservation of mass-energy, the conflict seems to be with your intuition that, under decoherence, mass-energy would be divided up among the various branches and thus diminished in each branch. If you did accept the superiority of decoherence & MWI over the standard interpretation, you’d have to set this intuition aside.
You are free to select the version of Occam’s Razor that appeals to you. I like the one that chooses a complete explanation (that also explains fine-tuning) over an incomplete explanation that also requires an exogenous wave-collapse for the first natural observer.
Yes, I see the motivation there. MWI may seem like an OK alternative to the idea that wave functions actually change in the presence of consciousnesses but not in their absence.
I guess I’ve never really believed that consciousness was an important part of it. Take for example a two-slit experiment with a slow stream of electrons launched at the slits and a recording electron detector array beyond them. One would have a time series of locations where the electrons hit the array, which would probabilistically, over time, build up the two-slit diffraction pattern, but in which each electron was identified with where on the array it ended up. Suppose you set this up in a room. In one version you send an observer in 3 days later to look at what happened, and they see all that evidence of each electron wavefunction collapsing into a position eigenstate on the detector array. In another version you don’t send someone into the room until 10 years later, at which point the observer sees the historical record of all those wave function collapses. Finally, there is a version where you never send anyone into the room.
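Here is a minimal numerical sketch of the record that builds up in that room (my own toy model, with made-up parameters, not anyone’s published setup): each simulated electron is recorded at one definite detector position, and only the accumulated histogram shows the two-slit fringes.

```python
import numpy as np

# Toy model of the sealed-room two-slit experiment described above: electrons
# arrive one at a time, each is recorded at a single detector position, and
# only the accumulated record shows the interference pattern. All numbers
# (fringe spacing, envelope width, hit count) are invented for illustration.
x = np.linspace(-1.0, 1.0, 2001)           # detector-array positions
envelope = np.sinc(3.0 * x) ** 2           # single-slit diffraction envelope
fringes = np.cos(20.0 * np.pi * x) ** 2    # two-slit interference fringes
intensity = envelope * fringes
p = intensity / intensity.sum()            # Born-rule probability per pixel

rng = np.random.default_rng(0)
hits = rng.choice(x, size=5000, p=p)       # the stored "historical record"

# Sum the record into a histogram: the familiar fringe pattern emerges.
counts, edges = np.histogram(hits, bins=80, range=(-1.0, 1.0))
for left, c in zip(edges[:-1], counts):
    print(f"{left:+.2f} {'#' * int(c // 5)}")
```

The only point of the toy model is that what ends up stored is a list of single, definite positions; the interference pattern is a statistical property of the whole record.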
It has always been my opinion that whatever collapses happened actually happened at the time recorded for each electron wave function hitting the screen. The version of the Copenhagen interpretation used here seems to go with the idea that the entire room, including the detector array and the equipment used to record its results, exists in a quantum superposition of all the wavefunctions UNTIL the observer finally goes into the room, at which point it collapses into a state with all the stored data showing one allowed version of the historical record of electron collapses.
Very intriguing. There is literally no way in this experiment to distinguish between “the wave functions collapsed as each electron hit the detector, and my coming along later to see the record of it isn’t what caused the collapse” and “all possible histories of that room were swirling around in a quantum superposition until I finally came along, and my peeking in the door caused it to collapse onto one possible outcome.”
I’ve never fully understood Bell’s theorem and EPR, but I suppose I am stating a version of the same question. How do I design an experiment that distinguishes between that room being in a quantum superposition until I finally get there, and that room being a dry tomb of the records of wave functions that collapsed long ago? Bell’s theorem, if I am right that it applies here, says you can distinguish the two, and that the room does exist in a quantum superposition until you get there: there are certain subtle statistics of correlations that hold, but that would not be possible in a room which is merely the tomb of old, dead wave functions.
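For concreteness, here is the standard textbook CHSH form of Bell’s theorem (nothing specific to my sealed-room scenario): with two detector settings on each side, one compares the combination

$$ S \;=\; E(a,b) - E(a,b') + E(a',b) + E(a',b'), $$

where $E(a,b)$ is the correlation of the $\pm 1$ outcomes at settings $a$ and $b$. Any locally pre-determined record obeys $|S| \le 2$, while quantum mechanics allows up to $|S| = 2\sqrt{2}$, which is what entangled-pair experiments observe. Whether a version of this could actually be run “inside the closed room” to separate early collapse from late collapse is exactly the question I’m asking, so I’m not claiming the answer here.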
I realize as I participate in this discussion that for me, the collapse has been a result of the wave function and the observing EQUIPMENT, not a function of my consciousness. It is possible that Bell’s theorem test results mean I am just wrong about this, but maybe not; bear with me.
I did work for years on the quantum treatment of one example of “measuring equipment”: a linear amplifier for radio waves. The quantum uncertainty in radio waves is manifested as “photon counting noise.” You think you have a pure sine wave, but it has some randomness due to quantum uncertainty, and a variety of bookkeeping methods for characterizing that uncertainty show it is equivalent to one photon’s worth of noise power in the system, even at absolute zero temperature when all removable noise has been removed. The linear amplifier amplifies those fluctuations, so at the output of the amplifier those quantum fluctuations are now large, carrying the energy of a million photons each (for an amplifier with gain one million) and therefore, essentially, treatable as a classical phenomenon. My interpretation is: the linear amplifier collapsed the original input wavefunction, and turned its quantum noise into solidly instantiated classical noise.
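For reference, the bookkeeping I have in mind is usually summarized by the standard quantum limit for phase-insensitive linear amplifiers (Caves’ bound, quoted here from memory, so treat the details as a sketch):

$$ N_{\text{added}} \;\ge\; \frac{1}{2}\left|1 - \frac{1}{G}\right| \;\xrightarrow{\;G \gg 1\;}\; \frac{1}{2} \ \text{quantum, referred to the input}, $$

so on top of the $\hbar\omega/2$ of zero-point fluctuation already present at the input, a high-gain amplifier must add at least another half quantum, for a total of roughly one photon’s worth of input-referred noise even at absolute zero; the output is that noise multiplied by the gain $G$.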
But I have a feeling that if I understood EPR and Bell’s theorem I would see it is not the amplification that collapses it. Hmmm.
It has always been my opinion that whatever collapses happened actually happened at the time recorded for each electron wave function hitting the screen.
This is the hypothesis that was tested, and failed, in the 2007 implementation of Wheeler’s delayed choice experiment.
Thanks for the link to Wheeler’s experiment. This experiment doesn’t address what I was addressing.
In Wheeler’s experiment, the detector is put in place or taken away after the photons would have had to pass through the slits. Even though the choice of detector (diffraction pattern vs. which slit) is made after the photons pass through, the photons are not “fooled” by this and behave at whichever detector is there when they get to it exactly as they should according to QM.
In my experiment, the detector and a recording device are locked in a room with no observer. The detector is never changed; the experiment just takes place without a human observer. It isn’t until sometime later that the room is opened and a human comes in to see the results of the experiment. The human does indeed see a time series of recorded electron hits on the detector which, when summed up, show the famous diffraction pattern. The question I address is:
Did the wavefunction for the whole room, including the detection apparatus, not collapse onto one of its allowed outcomes until the human finally went into the room to see the result, or
Did each electron’s wavefunction collapse at the time the computer recorded that the detector saw that electron?
My intuition has always been that it is not so much a consciousness seeing the result of the experiment that causes the collapse, as it is something about the apparatus for detecting the outcome of the experiment that causes the collapse. That the wavefunction is spread out across the detector array and then BAM it all gets sucked down somehow to only a single element of the detector array which is triggered.
In this view, wavefunction collapse is much more mechanical than the way Copenhagen gets talked about around here.
Also, I think that whether the WF collapses when you go in the door, or whether pieces of it collapse each time an electron is recorded at the array, may be experimentally indistinguishable. However, it may be that Bell’s theorem / EPR experiments do speak to this situation: that there are closed-room experiments that could be done where an earlier collapse vs. a later collapse, when the observer finally arrives, could be distinguished by some subtlety in how the results are distributed (which is how I see EPR).
You have my condolences. I have waged this battle here for some time, without much success. If you press EY on the matter, all he says is something along the lines of “MWI is decoherence, decoherence is MWI”, which renders the MWI a redundant concept. Unfortunately, nearly all non-experts here fall into the password-guessing trap, while furiously denying it. Probably because the MWI seems so cute and even intuitive, even if bereft of substance.
Indeed. Thank you for making the points you did in the first paragraph; that’s more or less what I was noting (in perhaps too-general terms). I was going to respond to another post that falsely contended that just because there are two states, those two necessarily exhaust all of the possibilities that obtain (as ens rationis), since the state of the cat as such is not only discrete but also continuous.
I would not be so quick to dismiss MW on account of the heuristic value of the idea of multiverses (and the successive hierarchy of universes), because rationality cannot be used to dismiss the preeminent possibility of any possibility. Anyway, there’s a pretty interesting article on arXiv by R. Vaas about it: http://arxiv.org/abs/1001.0726 .