Well, the beauty of the actual system is that you don’t need a “natural observer” until you have one. You calculate the time-evolution of the system with non-collapsing wave functions, then you collapse the wave function only when an observer finally shows up to make the observation.
It doesn’t matter if the wave functions were waiting billions of years to finally be collapsed, you are not missing anything by not having collapses before you have natural observers.
meaning the standard QM interpretation?
But consider the first natural observer, composed of matter. At what point do the wave functions associated with that matter collapse? Before or after its first observation?
With decoherence & MWI, this question presents no problem.
Irrespective of my comment below, where I get more sympathetic to the motivation for MWI, I do want to point out some of the reasons why I think MWI may be a “bridge too far” to solve any problems.
The universe as we know it has proven to be gigantically “conservative,” in the sense of having a bunch of conservation laws that it simply never violates, conservation of mass-energy being among the deepest and most powerful. In this universe, at this epoch, stuff is neither created nor destroyed: it is converted from one kind of stuff into another with strict conservation. Even particle pairs that arise from random vacuum fluctuations soon “realize” that they are violating conservation of energy and disappear within the window the uncertainty principle allows, before you can say they were ever there.
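As a rough, purely illustrative check on that last point (the constants are standard, but the back-of-the-envelope bookkeeping here is an assumption, not anything from the thread), the uncertainty principle ΔE·Δt ≳ ħ/2 puts the allowed lifetime of a virtual electron-positron pair at around 10⁻²² seconds:

```python
# Back-of-the-envelope: how long can a virtual electron-positron pair
# "borrow" its rest energy before the uncertainty principle calls in the loan?
# (Illustrative only: uses Delta_E * Delta_t ~ hbar / 2 as a rough bound.)

hbar = 1.0546e-34          # reduced Planck constant, J*s
m_e = 9.109e-31            # electron mass, kg
c = 2.998e8                # speed of light, m/s

delta_E = 2 * m_e * c**2   # energy borrowed to create the e+/e- pair, J
delta_t = hbar / (2 * delta_E)

print(f"Borrowed energy:  {delta_E:.3e} J  (~1.02 MeV)")
print(f"Allowed lifetime: {delta_t:.3e} s")   # ~3e-22 s: gone before it "was ever there"
```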
So now we come along with a subtle issue about wavefunction collapse, what really causes it, and what it all means, and the proposed solution is: the universe may be strictly conservative, but the multiverse is growing in total mass and energy about as fast as any growth function you can conceive, and THAT is what makes the direction of time so strong?
Yes, of course this COULD wind up being right and being the simplest. But I await proposed experimental verifications; without them I can NEVER pick a non-conservative multiverse.
But thanks for making it clearer what some of the things that are gained are.
That’s not how MWI works. These worlds are not being created. The wavefunction of the universe is being split up between them.
Are there more of these worlds now than there were 15 billion years ago?
If so, you can call it anything you want, but I vote that “created” is a pretty good term for something that exists now that didn’t use to.
I failed to mention one major additional point. Decoherence and MWI also account for the observed fine-tuning of the universe to support life, including key details of the inflation hypothesis. The standard interpretation doesn’t.
As to conservation of mass-energy, this seems to conflict with your intuition that, if there were decoherence, mass-energy would be divided up among the various branches and thus diminished in each branch. If you did accept the superiority of decoherence & MWI over the standard interpretation, you’d have to set this intuition aside.
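A minimal toy sketch, assuming the usual system-plus-environment picture (the amplitudes and two-level systems below are invented purely for illustration), of the sense in which decoherence divides the wavefunction’s weight among branches rather than duplicating it:

```python
import numpy as np

# "Splitting" a superposition into decohered branches divides up the squared
# amplitude (the Born weight); nothing about the total norm is created or lost.

a, b = 0.6, 0.8                        # amplitudes of the system state a|0> + b|1>

# Detector/environment states that become correlated with |0> and |1>
env0 = np.array([1.0, 0.0])
env1 = np.array([0.0, 1.0])

# After the (unitary) measurement interaction: an entangled system+environment state
joint = a * np.kron(np.array([1, 0]), env0) + b * np.kron(np.array([0, 1]), env1)

branch_weights = [abs(a)**2, abs(b)**2]
print("Total norm of joint state:", np.vdot(joint, joint).real)   # 1.0: nothing created
print("Branch weights:", branch_weights, "sum =", sum(branch_weights))
```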
You are free to select the version of Occam’s Razor that appeals to you. I like the one that chooses a complete explanation (that also explains fine-tuning) over an incomplete explanation that also requires an exogenous wave-collapse for the first natural observer.
Yes I see the motivation there. MWI may seem like an OK alternative to wave functions actually changing in the presence of consciousnesses, but not in their absence.
I guess I’ve never really believed that consciousness was an important part of it. Take, for example, a two-slit experiment with a slow stream of electrons launched at the slits and a recording electron-detector array beyond them. One would get a time series of the locations where the electrons hit the array, which over time would probabilistically build up the two-slit diffraction pattern, but in which each electron is identified with where on the array it ended up. Suppose you set this up in a room. In one version you send an observer in 3 days later to look at what happened, and they see all the evidence of each electron wavefunction collapsing into a position eigenstate on the detector array. In another version you don’t send someone into the room until 10 years later, at which point the observer sees the historical record of all those wave function collapses. Finally, there is a version where you never send someone into the room.
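Here is a toy numerical version of that setup, with made-up slit geometry and detector parameters, just to illustrate that each electron is recorded at a single array element while the diffraction pattern only emerges in the accumulated record:

```python
import numpy as np

# Toy simulation of the thought experiment (purely illustrative; the slit
# geometry, wavelength, and detector size are invented parameters).

rng = np.random.default_rng(0)

n_pixels = 200                        # elements in the detector array
x = np.linspace(-1.0, 1.0, n_pixels)  # detector coordinate (arbitrary units)

# |psi(x)|^2 for an idealized two-slit pattern: interference fringes
# under a single-slit envelope (parameters chosen only to look reasonable)
envelope = np.sinc(2.0 * x) ** 2
fringes = np.cos(12.0 * x) ** 2
prob = envelope * fringes
prob /= prob.sum()

# Each "electron" is one draw: a single pixel fires, and the time-stamped
# hit is what the recording equipment stores, observer or no observer.
n_electrons = 5000
hits = rng.choice(n_pixels, size=n_electrons, p=prob)

counts = np.bincount(hits, minlength=n_pixels)
print("First ten recorded hit positions:", x[hits[:10]].round(2))
print("Pixel with the most hits:", x[counts.argmax()].round(2))
```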
It has always been my opinion that whatever collapses happened actually happened at the time recorded for each electron wave function hitting the screen. The version of the Copenhagen interpretation used here seems to go with the idea that the entire room, including the detector array and the equipment used to record its results, exists in a quantum superposition of all the wavefunctions UNTIL the observer finally goes into the room, at which point it collapses into a state with all the data stored, showing one allowed version of the historical record of electron collapses.
Very intriguing. There is literally no way in this experiment to distinguish between “the wave functions collapsed as each electron hit the detector, and my coming along later to see the record of it isn’t what caused the collapse” and “all possible histories of that room were swirling around in a quantum superposition until I finally came along, and my peeking in the door caused it to collapse onto one possible outcome.”
I’ve never fully understood Bell’s theorem and EPR, but I suppose I am stating a version of the same question. How do I design an experiment that distinguishes between that room being in a quantum superposition until I finally get there, and that room being a dry tomb of the records of wave functions that collapsed long ago? Bell’s theorem, if I am right that it applies here, says you can distinguish them, and that the room does exist in a quantum superposition until you get there: there are certain subtle statistics of correlations that hold but would not be possible in a room which is merely the tomb of old, dead wave functions.
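For concreteness, here is a sketch of the standard textbook CHSH comparison (not an experiment anyone in this thread performed): a local-hidden-variable model for the correlations tops out at |S| = 2, while quantum mechanics predicts 2√2 for entangled pairs.

```python
import numpy as np

# CHSH sketch: compare the quantum singlet correlation E(a,b) = -cos(a - b)
# with a toy local-hidden-variable model in which each pair carries one
# shared random angle and each side's outcome depends only on its own setting.

rng = np.random.default_rng(1)
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4   # analyzer settings

def E_quantum(x, y):
    # Quantum prediction for spin-singlet correlations
    return -np.cos(x - y)

def E_hidden_variable(x, y, n=200_000):
    # Toy local model: shared hidden angle lam, deterministic local outcomes
    lam = rng.uniform(0, 2 * np.pi, n)
    A = np.sign(np.cos(x - lam))
    B = -np.sign(np.cos(y - lam))
    return np.mean(A * B)           # finite-sample estimate

for name, E in [("quantum", E_quantum), ("local hidden variable", E_hidden_variable)]:
    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(f"{name}: |S| = {abs(S):.3f}  (local-hidden-variable bound is 2)")
```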
I realize as I participate in this discussion that, for me, the collapse has been a result of the wave function and the observing EQUIPMENT, not a function of my consciousness. It is possible that Bell’s theorem test results mean I am just wrong about this, but maybe not, so bear with me.
I did work for years on the quantum treatment of one example of “measuring equipment”: a linear amplifier for radio waves. The quantum uncertainty in radio waves is manifested as “photon counting noise.” You think you have a pure sine wave, but it has some randomness due to quantum uncertainty, and a variety of bookkeeping methods for characterizing that uncertainty show it is equivalent to one photon’s worth of noise power in the system, even at absolute zero of temperature when all removable noise has been removed. The linear amplifier, we see, amplifies those fluctuations, so at the output of the amplifier those quantum fluctuations are now large, carrying the energy of a million photons each (for an amplifier with a gain of one million) and therefore essentially treatable as a classical phenomenon. My interpretation is: the linear amplifier collapsed the original input wavefunction and turned its quantum noise into solidly instantiated classical noise.
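Putting rough numbers on that picture, taking the “one photon’s worth of noise” bookkeeping at face value (the 1 GHz frequency, 10⁶ gain, and room temperature below are just illustrative choices):

```python
# Rough numbers for the amplifier picture above (illustrative parameters only).

h = 6.626e-34          # Planck constant, J*s
k_B = 1.381e-23        # Boltzmann constant, J/K

f = 1e9                # a 1 GHz radio signal
gain = 1e6             # power gain of the linear amplifier
T = 300.0              # room temperature, K

input_noise = h * f              # ~ one photon's worth of noise energy
output_noise = gain * input_noise

print(f"Input quantum noise: {input_noise:.2e} J  (1 photon at 1 GHz)")
print(f"Output noise:        {output_noise:.2e} J  (~{gain:.0e} photons)")
print(f"Output noise / kT:   {output_noise / (k_B * T):.0f}x thermal at 300 K")
```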
But I have a feeling that if I understood EPR and Bell’s theorem I would see it is not the amplification that collapses it. Hmmm.
This is the hypothesis that was tested, and failed, in the 2007 implementation of Wheeler’s delayed choice experiment.
Thanks for the link to Wheeler’s experiment. This experiment doesn’t address what I was addressing.
In Wheeler’s experiment, the detector is put in place or taken away after the photons would have had to have passed through the slits. Even though the choice of detector (diffraction pattern vs. which-slit) is made after the photons pass through, the photons are not “fooled” by this and behave at whichever detector is there when they arrive exactly as they should according to QM.
In my experiment, the detector and a recording device are locked in a room with no observer. The detector is never changed; the experiment just takes place without a human observer. It isn’t until some time later that the room is opened and some human comes in to see the results of the experiment. The human does indeed see a time series of recorded electron hits on the detector which, when summed up, shows the famous diffraction pattern. The question I address is:
Did the wavefunction for the whole room, including the detection apparatus, not collapse onto one of its allowed outcomes until the human finally went into the room to see the result, or
Did each electron’s wavefunction collapse at the time the computer recorded that the detector saw that electron?
My intuition has always been that it is not so much a consciousness seeing the result of the experiment that causes the collapse, as something about the apparatus for detecting the outcome of the experiment that causes the collapse. The wavefunction is spread out across the detector array and then, BAM, it all gets sucked down somehow onto the single element of the detector array that is triggered.
In this view, wavefunction collapse is much more mechanical than in the way Copenhagen gets talked about around here.
Also, I think that whether the WF collapses when you go in the door, or whether pieces of it collapse each time an electron is recorded at the array, may be experimentally indistinguishable. However, it may be that Bell’s theorem / EPR experiments do speak to this situation: there could be experiments done in closed rooms where an earlier collapse vs. a later collapse, when the observer finally arrives, could be distinguished by some subtlety in how the results are distributed (which is how I see EPR).
You have my condolences. I have waged this battle here for some time, without much success. If you press EY on the matter, all he says is something along the lines of “MWI is decoherence, decoherence is MWI”, which renders the MWI a redundant concept. Unfortunately, nearly all non-experts here fall into the password-guessing trap, while furiously denying it. Probably because the MWI seems so cute and even intuitive, even if bereft of substance.