I’m at this point pretty confident that under the Copenhagen interpretation, whenever an intergalactic photon hits Earth, the wave-function collapse takes place on a semi-spherical wavefront many millions of light-years in diameter. I’m still trying to wrap my head around what the interpretation of this event is in many-worlds. I know that it causes Earth to pick which world it is in out of the possible worlds that split off when the photon was created, but I’m not sure whether there is any event on the whole spherical wavefront.
It’s not a pure hypothetical: we are likely to see gravitational-lens interferometry in our lifetime (if someone hasn’t already achieved it and my attempt at a literature review simply missed it), which will either confirm that these considerations are real or produce the shocking result that they aren’t.
possible worlds that split off when the photon was created
I don’t think this is a very good way of thinking about what happens. I think worlds appear as fairly robust features of the wavefunction when quantum superpositions get entangled with large systems, such that lots of the large system’s degrees of freedom come to differ depending on the state of the superposition.
So, when the intergalactic photon interacts non-trivially with a large system (e.g. Earth), a world becomes distinct in the wavefunction, because there’s a lump of amplitude that is separated from the other lumps of amplitude by distance along many, many dimensions of configuration space. This means it basically doesn’t interact with the rest of the wavefunction, and so it looks like a distinct world.
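A minimal numerical sketch of that mechanism (my own toy model; the qubit environment and the angle theta are assumed purely for illustration): a system in superposition entangles with N environment qubits, and the interference term between the two branches is a product of N single-qubit overlaps, so it dies off exponentially as the environment grows.

```python
import numpy as np

# Toy model: a system qubit in superposition entangles with N environment
# qubits, one at a time. Each environment qubit ends up in |e0> or |e1>
# depending on the branch, with per-qubit overlap <e0|e1> = cos(theta)
# (theta is an assumed, made-up distinguishability parameter).
# The interference term between the two branches is the product of all N
# single-qubit overlaps, so it decays exponentially with N.

theta = 0.3                                    # assumed per-qubit distinguishability
e0 = np.array([1.0, 0.0])
e1 = np.array([np.cos(theta), np.sin(theta)])  # <e0|e1> = cos(theta)
overlap = e0 @ e1

for n_env in (1, 10, 100, 1000):
    print(f"N = {n_env:4d} entangled qubits: branch interference ~ {overlap ** n_env:.3e}")
```

With these made-up numbers the branch overlap is already at the 10⁻²⁰ level by N = 1000; for a macroscopic system like Earth, with vastly more degrees of freedom, the suppression is astronomically stronger, which is why the lump of amplitude basically doesn’t interact with the rest.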
Most reasoning about many worlds, by physicist fans of the interpretation as well as by non-physicists, is done in a dismayingly vague way. If you want a many-worlds framework that meets physics standards of actual rigor, I recommend thinking in terms of the consistent (or decoherent) histories of Gell-Mann and Hartle.
In ordinary quantum mechanics, to go from the wavefunction to reality, you first specify which “observable” (potentially real property) you’re interested in, and then which possible values of that observable. E.g. the observable could be position, and the values could be specific possible locations. In a “Hartle multiverse”, you instead think in terms of the history of the world: you pick specific observables at various times (or times plus locations) in that history, then sets of possible values for those observables. You thereby get an ensemble of possible histories, one for each combination of the possible values. The calculational side of the interpretation then gives you a probability for each possible history, given a particular wavefunction of the universe.
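Here is what that recipe looks like in the simplest case I can think of. This is my own toy sketch (the single-qubit system, the Hamiltonian H = σx/2, the initial state, and the two sample times are all assumed for illustration, not taken from Gell-Mann and Hartle): histories are labelled by the value of σz at two times, each history’s class operator is a chain of Heisenberg-picture projectors, and the diagonal of the decoherence functional gives the probability of each history.

```python
import numpy as np
from itertools import product

# Decoherent-histories bookkeeping for a single qubit (all parameters
# assumed for illustration). Observable: sigma_z, sampled at times t1 < t2,
# so a history alpha = (a1, a2) is one combination of possible values.
# Class operator: C_alpha = P_a2(t2) P_a1(t1), with Heisenberg projectors
# P(t) = U(t)^dag P U(t). Decoherence functional:
#     D(alpha, alpha') = Tr[ C_alpha rho C_alpha'^dag ]
# Its diagonal is the probability of each history; those probabilities are
# trustworthy only when the off-diagonal (interference) entries are ~ 0.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
P = {+1: np.diag([1, 0]).astype(complex),      # projector onto sigma_z = +1
     -1: np.diag([0, 1]).astype(complex)}      # projector onto sigma_z = -1

def U(t):
    # exp(-i t sigma_x / 2) = cos(t/2) I - i sin(t/2) sigma_x
    return np.cos(t / 2) * np.eye(2) - 1j * np.sin(t / 2) * sx

def heis(proj, t):
    return U(t).conj().T @ proj @ U(t)

rho = np.diag([1, 0]).astype(complex)          # initial state |0><0|
t1, t2 = 1.0, 2.0                              # assumed sample times
hists = list(product((+1, -1), repeat=2))      # four possible histories
C = {a: heis(P[a[1]], t2) @ heis(P[a[0]], t1) for a in hists}

D = np.array([[np.trace(C[a] @ rho @ C[b].conj().T) for b in hists]
              for a in hists])

for a, p in zip(hists, D.diagonal().real):
    print(f"p(sigma_z(t1) = {a[0]:+d}, sigma_z(t2) = {a[1]:+d}) = {p:.3f}")
print(f"largest interference term: {np.abs(D - np.diag(D.diagonal())).max():.3f}")
```

For this closed qubit the interference terms do not vanish, which is the framework’s way of saying that these four fine-grained histories are not a legitimate set of “worlds”; coupling the system to an environment, as in the answer above, is what drives those terms toward zero.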
For physicists, the main selling point of this framework is that it allows you to do quantum cosmology, where you can’t separate the observer from the physical system under investigation. For me, it also has the advantage of being potentially relativistic (a chronic problem for less sophisticated approaches to many worlds), since spatially localized observables can be ordered in space-time rather than requiring an artificial universal time.
On the other hand, this framework doesn’t tell you how many “worlds” there are. That depends on the choice of observables. You can pick a single observable from one moment in the history of the universe (e.g. electromagnetic field strength at a certain space-time location), and use only that to define your possible worlds. That’s OK if you’re only interested in calculation, but if you’re interested in ontology as well (also known as “what’s actually there”), you may prefer some kind of “maximally refined” or “maximally fine-grained” set of histories, in which the possible worlds are defined by a set of observables and counterfactual properties that are as dense as possible while still being decoherent (e.g. without crowding so close as to violate the uncertainty principle). Investigation of maximally refined, decoherent multiverses could potentially lead to a new kind of ontological interpretation, but the topic is little investigated.
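The fine-graining trade-off can be seen in the same toy qubit as before (again with my own assumed parameters): a history set built on σz at a single time passes the consistency check exactly, while refining it with a second, earlier time reintroduces sizable interference between histories, so the refined set is no longer decoherent.

```python
import numpy as np
from itertools import product

# Same toy qubit as above (H = sigma_x/2, initial state |0>, sigma_z
# projectors; all parameters assumed). Compare two history sets:
#   coarse:  sigma_z at the single time t2  -> exactly decoherent
#   refined: sigma_z at both t1 and t2      -> interference reappears

sx = np.array([[0, 1], [1, 0]], dtype=complex)
P = {+1: np.diag([1, 0]).astype(complex), -1: np.diag([0, 1]).astype(complex)}
U = lambda t: np.cos(t / 2) * np.eye(2) - 1j * np.sin(t / 2) * sx
heis = lambda proj, t: U(t).conj().T @ proj @ U(t)
rho = np.diag([1, 0]).astype(complex)
t1, t2 = 1.0, 2.0

def max_interference(class_ops):
    # Largest off-diagonal |D(alpha, alpha')| over the history set.
    D = np.array([[np.trace(Ca @ rho @ Cb.conj().T) for Cb in class_ops]
                  for Ca in class_ops])
    return np.abs(D - np.diag(D.diagonal())).max()

coarse = [heis(P[a], t2) for a in (+1, -1)]
refined = [heis(P[a2], t2) @ heis(P[a1], t1)
           for a1, a2 in product((+1, -1), repeat=2)]

print(f"coarse set  (1 time):  max interference = {max_interference(coarse):.2e}")
print(f"refined set (2 times): max interference = {max_interference(refined):.2e}")
```

A “maximally refined” set in the sense above would pack in as many observables as possible while keeping that interference figure at zero.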
I’m at this point pretty confident that under the Copenhagen interpretation, whenever an intergalactic photon hits Earth, the wave-function collapse takes place on a semi-spherical wavefront many millions of light-years in diameter. I’m still trying to wrap my head around what the interpretation of this event is in many-worlds.
Under MWI, before the photon (a localized excitation of the EM field) reaches Earth, there are already many worlds differing in the values of the EM field (the electromagnetic tensor), and therefore in the photon’s direction, position, etc. Each of those worlds evolves into a variety of worlds; those in which the light hit Earth become somewhat different from those in which it missed. So the integrated probability that the photon is still on the way decreases, while P(photon has been observed) increases. Whenever some probability mass of the EM disturbance arrives, it is transformed smoothly, with no instantaneous effects far away.
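A toy illustration of that smooth transfer (my own sketch; the 1D wave packet, the absorber position, and all parameters are made up): a packet propagates toward an absorbing region standing in for Earth, and the surviving norm, which plays the role of P(photon is still on the way), falls smoothly while the packet overlaps the absorber and stays flat otherwise.

```python
import numpy as np

# Toy 1D picture of the smooth hand-off (all numbers made up): a wave packet
# travels to the right and hits an absorbing region ("Earth", modelled as a
# complex absorbing potential around x = 50). The total surviving norm plays
# the role of P(photon is still on the way): it stays flat until the packet
# reaches the absorber, then falls smoothly, with nothing instantaneous
# happening anywhere else on the grid.

N, L, dt = 2048, 400.0, 0.05
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

psi = np.exp(-(x + 100.0) ** 2 / 50.0) * np.exp(1j * 2.0 * x)  # moving right, v ~ 2
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

absorber = -0.5j * (np.abs(x - 50.0) < 10.0)   # "Earth" occupies 40 < x < 60

kin = np.exp(-1j * k ** 2 / 2 * dt)            # free-particle step (hbar = m = 1)
pot = np.exp(-1j * absorber * dt)              # damps amplitude inside the absorber

for step in range(2001):                       # split-step Fourier evolution
    if step % 200 == 0:
        p_in_flight = np.sum(np.abs(psi) ** 2) * dx
        print(f"t = {step * dt:5.1f}: P(still on the way) = {p_in_flight:.3f}")
    psi = np.fft.ifft(kin * np.fft.fft(pot * psi))
```

The printed probability sits at 1.000 until the packet reaches the absorber, then drains away over a finite interval; nothing jumps, and nothing happens to the amplitude far from the absorber except ordinary propagation.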