The multiverse interpretation takes the wavefunction literally and says that since the math describes a multiverse, there is a multiverse.
YMMV about how literally you take the math. I’ve come to have a technical objection to it such that I’d be inclined to say that the multiverse theory is wrong, but also it is very technical and I think a substantial fraction of multiverse theorists would say “yeah that’s what I meant” or “I suppose that’s plausible too”.
But “take the math literally” sure seems like good reason/evidence.
And when it comes to pilot wave theory, its math also postulates a wavefunction, so if you take the math literally for pilot wave theory, you get the Everettian multiverse; you just additionally declare one of the branches Real in a vague sense.
What’s the technical objection you have to it?
Gonna post a top-level post about it once it’s made it through editing, but basically the wavefunction is a way to embed a quantum system in a deterministic system, very closely analogous to how a probability function allows you to embed a stochastic system into a deterministic system. So just like how taking the math literally for QM means believing that you live in a multiverse, taking the math literally for probability also means believing that you live in a multiverse. But it seems philosophically coherent for me to believe that we live in a truly stochastic universe rather than just a deterministic probability multiverse, so it also feels like it should be philosophically coherent that we live in a truly quantum universe.
What do you mean by “a truly quantum universe”?
Before I answer that question: do you know what I mean by a truly stochastic universe? If so, how would you explain the concept of true ontologically fundamental stochasticity to a mind that does not know what it means?
I think by “truly stochastic” you mean that multiple future outcomes are possible, rather than one inevitable outcome. You don’t merely mean “it’s absolutely physically impossible to take the necessary measurements to predict things” or “a coin flip is pretty much random for all intents & purposes”. That’s my guess.
Kind of, because “multiple future outcomes are possible, rather than one inevitable outcome” could sort of be said to apply to both true stochasticity and true quantum mechanics. With true stochasticity, it has to evolve by a diffusion-like process with no destructive interference, whereas for true quantum mechanics, it has to evolve by a unitary-like process with no information loss.
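To make the diffusion-vs-unitary contrast concrete, here’s a toy numpy sketch (purely illustrative, not part of the argument above): a stochastic matrix has nonnegative entries, so probability mass can only mix and spread, while a unitary has signed amplitudes that can cancel.

```python
import numpy as np

# A stochastic (Markov) matrix: entries are nonnegative, so
# contributions only ever add -- spreading is irreversible.
stochastic = np.array([[0.5, 0.5],
                       [0.5, 0.5]])
p = np.array([1.0, 0.0])   # start surely in state 0
p = stochastic @ p         # -> [0.5, 0.5]
p = stochastic @ p         # still [0.5, 0.5]: no way to un-mix

# A unitary matrix: amplitudes can cancel. The Hadamard applied
# twice returns the initial state -- destructive interference
# undoes the spreading, which no stochastic map can do.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
a = np.array([1.0, 0.0])   # amplitude vector for |0>
a = H @ a                  # equal superposition
a = H @ a                  # back to [1, 0] exactly
```

The no-destructive-interference property is exactly why a diffusion-like process cannot mimic the recurrences a unitary-like process allows.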
So to a mind that can comprehend probability distributions, but intuitively thinks they always describe hidden variables or frequencies or whatever, how does one express true stochasticity, the notion where a probability distribution of future outcomes are possible (even if one knew all the information that currently exists), but only one of them happens?
I’ve argued before that true randomness cannot be formalized, and that therefore the Kolmogorov complexity of a stochastic universe is infinite. But of course then the out-of-model uncertainty dominates the calculation; maybe one needs a measure with a randomness primitive. (If someone thinks they can explain randomness in terms of other concepts, I also want to see it.)
The math doesn’t describe a multiverse, in the sense that solving the Schrödinger equation for the universe does not, by itself, give you a structure with clearly separated decoherent branches every time. You need additional assumptions, which have their own complexity cost.
In fact, MWI is usually argued from models of a few particles. These can show coherent superposition, but to spin an MW theory worthy of the name out of that, you need superpositions that can be maintained at large scale and that also decohere into non-interacting branches, preferably by a mechanism that can be found entirely in the standard formalism.
I’m confused about what you’re saying. In particular while I know what “decoherence” means, it sounds like you are talking about some special formal thing when you say “decoherent branches”.
Let’s consider the case of Schrödinger’s cat. Surely the math itself says that when you open the box, you end up in a superposition of |see the cat alive> + |see the cat dead>.
Or from a comp sci PoV, I imagine having some initial bit sequence, |0101010001100010>, and then applying a Hadamard gate to end up with a superposition (sqrt(1/2) |0> + sqrt(1/2) |1>) ⊗ |101010001100010>. Next I imagine a bunch of CNOTs that mix together this bit in superposition with the other bits, making the superpositions very distant from each other and therefore unlikely to interact.
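As a toy rendering of that circuit picture (two qubits standing in for the whole register, just a sketch): once the Hadamard-created superposition is copied into a second qubit by a CNOT, the first qubit’s reduced density matrix becomes diagonal, i.e. the branches no longer interfere locally.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                       # |00>
state = np.kron(H, I) @ state        # (|00> + |10>)/sqrt(2)
state = CNOT @ state                 # (|00> + |11>)/sqrt(2)

# Reduced density matrix of the first qubit: trace out the second.
rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
rho_1 = np.einsum('abcb->ac', rho)
print(rho_1)  # [[0.5, 0], [0, 0.5]]: the off-diagonal
              # (interference) terms vanish once the qubits entangle
```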
What are you saying goes wrong in these pictures?
In a classical basis. But you could rewrite the superposition in other bases that we don’t observe. That’s one problem.
As Penrose writes (Road to Reality 29.8): “Why do we not permit these superposed perception states? Until we know exactly what it is about a quantum state that allows it to be considered as a ‘perception’, and consequently see that such superpositions are ‘not allowed’, we have really got nowhere in explaining why the real world of our experiences cannot involve superpositions of live and dead cats.” Penrose gives the example of

2|ψ⟩ = {|perceiving live cat⟩ + |perceiving dead cat⟩} {|live cat⟩ + |dead cat⟩} + {|perceiving live cat⟩ − |perceiving dead cat⟩} {|live cat⟩ − |dead cat⟩}

as an example of a surreal, non-classical superposition.
And/or, we have no reason to believe, given only the formalism itself , that the two versions of the observer will be unaware of each other, and able to report unambiguously on their individual observations. That’s the point of the de/coherence distinction. In the Everett theory, everything that starts in a coherent superposition, stays in one.
“According to Everett’s pure wave mechanics, when our observer makes a measurement of the electron he does not cause a collapse, but instead becomes correlated with the electron. What this means is that where once we had a system that consisted of just an electron, there is now a system that consists of the electron and the observer. The mathematical equation that describes the state of the new system has one summand in which the electron is z-spin up and the observer measured “z-spin up” and another in which the electron is z-spin down and the observer measured “z-spin down.” In both summands our observer got a determinate measurement record, so in both, if we ask him whether he got a determinate record, he will say “yes.” If, as in this case, all summands share a property (in this case the property of our observer saying “yes” when asked if he got a determinate measurement record), then that property is determinate.
This is strange because he did not in fact get a determinate measurement record; he instead recorded a superposition of two outcomes. After our observer measures an x-spin up electron’s z-spin, he will not have determinately gotten either “*z*-spin up” or “*z*-spin down” as his record. Rather he will have determinately gotten “*z*-spin up or *z*-spin down,” since his state will have become correlated with the state of the electron due to his interaction with it through measurement. Everett believed he had explained determinate experience through the use of relative states (Everett 1957b: 146; Everett 1973: 63, 68–70, 98–9). That he did not succeed is largely agreed upon in the community of Everettians.”
(https://iep.utm.edu/everett/#H5)
Many attempts have been made to fix the problem, notably decoherence based approaches.
The expected mechanism of decoherence is interaction with a larger environment… which is assumed to already be in a set of decoherent branches on a classical basis. But why? At this point, it becomes a cosmological problem. You can’t write a wavefunction of the universe without some cosmological assumptions about the initial state, and so on, so whether it looks many-worldish or not depends on the assumptions.
It’s just a matter of definition. We say that “you” and “I” are the things that are entangled with a specific observed state. Different versions of you are entangled with different observations. Nothing is stopping you from defining a new kind of person which is a superposition of different entanglements. The reason it doesn’t “look” that way from your perspective is because of entanglement and the law of the excluded middle. What would you expect to see if you were a superposition?
If I were in a coherent superposition, I would expect to see non-classical stuff. Entanglement alone is not enough to explain my sharp-valued, quasi-classical observations.
It isn’t just a matter of definition, because I don’t perceive non-classical stuff, so I lack motivation to define “I” in a way that mispredicts that I do. You don’t get to arbitrarily relabel things if you are in the truth-seeking business.
The objection isn’t to using “I” or “the observer” to label a superposed bundle of sub-persons, each of which individually is unaware of the others and has a normal, classical-style experience, because that doesn’t mispredict my experience. The problem is that a “superposed bundle of persons, each of which is unaware of the others and has a normal, classical-style experience” is what you get from a decoherent superposition, and I am specifically talking about coherent superposition. (“In the Everett theory, everything that starts in a coherent superposition, stays in one.”) Decoherence was introduced precisely to solve the problem with Everett’s RSI.
Let’s say you have some unitary transformation M=U⊕S. If you were to apply this to a coherent superposition (|0>+|1>)⊗v, it seems like it would pretty much always make you end up with a decoherent superposition. So it doesn’t seem like there’s anything left to explain.
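A rough numerical sketch of that claim (illustrative only; random unitaries stand in for U and S, which is an assumption about generic behavior): the block-diagonal M evolves the environment by U in the |0> branch and by S in the |1> branch, and the system’s remaining coherence is the overlap ⟨Sv|Uv⟩, which is generically tiny for a large environment.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 200                                  # environment dimension

def random_unitary(d):
    # Haar-ish random unitary via QR of a complex Gaussian matrix
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

U, S = random_unitary(d), random_unitary(d)
v = rng.normal(size=d) + 1j * rng.normal(size=d)
v /= np.linalg.norm(v)

# M = U (+) S applied to (|0> + |1>)/sqrt(2) (x) v gives
# (|0> (x) Uv + |1> (x) Sv)/sqrt(2); tracing out the environment
# leaves off-diagonal magnitude |<Sv|Uv>| / 2 for the system qubit.
coherence = abs(np.vdot(S @ v, U @ v)) / 2
print(coherence)  # generically of order 1/sqrt(d): nearly decohered
```

Under this (assumed) genericity, the coherence shrinks as the environment grows, which is the sense in which “pretty much always” ending up decoherent is plausible.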
I’m not trying to say all forms of MW are hopeless. I am saying:

1. there is more than one form;
2. there are trade-offs between simplicity and correctness—there’s no simple and adequate MWI;
3. decoherence isn’t simple—you can’t find it by naively looking at the SWE, and it took three or four decades for physicists to notice it;
4. it also doesn’t unequivocally support MW: when we observe decoherence, we observe it one universe at a time, and maybe in the one and only universe.
“Decoherence does half the job of solving the measurement problem. In short, it tells you that you will not in practice be able to observe that Schrödinger’s cat is in a superposition, because the phase between the two parts of the superposition would not be sufficiently stable. But the concept of decoherence does not, on its own, yield an answer to the question “how come the experimental outcome turns out to be one of A or B, not both A and B carried forward together into the future?”
The half-job that decoherence succeeds in doing is to elucidate the physical process whereby a preferred basis or pointer basis is established. As you say in the question, any given quantum state can be expressed as a superposition in some basis, but this ignores the dynamical situation that physical systems are in. In practice, when interactions with large systems are involved, states in one basis will stay still, states in another basis will evolve VERY rapidly, especially in the phase factors that appear as off-diagonal elements of density matrices. The pointer basis is the one where, if the system is in a state in that basis, then it does not have this very fast evolution.
But as I say, this observation does not in and of itself solve the measurement problem in full; it merely adds some relevant information. It is the next stage where the measurement problem really lies, and where people disagree. Some people think the pointer basis is telling us about different parts of a ‘multiverse’ which all should be regarded as ‘real’. Other people think the pointer basis is telling us when and where it is legitimate to assert ‘one thing and not both things happen’.
That’s it. That’s my answer to your question.
But I can’t resist the lure, the sweet call of the siren, “so tell us: what is really going on in quantum measurement?” So (briefly!) here goes.
I think one cannot get a good insight into the interpretation of QM until one has got as far as the fully relativistic treatment and therefore field theory. Until you get that far you find yourself trying to interpret the ‘state’ of a system; but you need to get into another mindset, in which you take an interest in events, and how one event influences another. Field theory naturally invites one to a kind of ‘input-output’ way of thinking, where the mathematical apparatus is not trying to say everything at once, but is a way of allowing one to ask and find answers to well-posed questions. There is a distinction between maths and physical stuff. The physical things evolve from one state to another; the mathematical apparatus tells us the probabilities of the outcomes we put to it once we have specified what is the system and what is its environment. Every system has an environment and quantum physics is a language which only makes sense in the context of an environment.
In the latter approach (which I think is on the right track) the concept of ‘wavefunction of the whole universe’ is as empty of meaning as the concept of ‘the velocity of the whole universe’. The effort to describe the parts of such a ‘universal wavefunction’ is a bit like describing the components of the velocity of the whole universe. In saying this I have gone beyond your question, but I hope in a useful way.”
https://physics.stackexchange.com/questions/256874/simple-question-about-decoherence
ETA:
“Despite how tidy the decoherence story seems, there are some people for whom it remains unsatisfying. One reason is that the decoherence story had to bring in a lot of assumptions seemingly extraneous to quantum mechanics itself: about the behavior of typical physical systems, the classicality of the brain, and even the nature of subjective experience. A second reason is that the decoherence story never did answer our question about the probability you see the dot change color – instead the story simply tried to convince us the question was meaningless.” Quantum Computing since Democritus, 2nd Ed, p. 169.
We should not expect any bases not containing conscious observers to be observed, but that’s not the same as saying they’re not equally valid bases. See Everett and Structure, esp. section 7.
We don’t have to regard basis as objective, ITFP.
But |cat alive> + |cat dead> is a natural basis because that’s the basis in which the interaction occurs. No mystery there; you can’t perceive something without interacting with it, and an interaction is likely to have some sort of privileged basis.
Regarding basis as an observer’s own choice of “co-ordinate grid”, and regarding an observer (or instrument) as having a natural basis, is a simple and powerful theory of basis. Since an observer’s natural basis is the one that minimises superpositions, the fact that observers make quasi-classical observations drops out naturally, without any cosmological assumptions. But since there is no longer a need for a global and objective basis, a basis that is a feature of the universe, there is no longer a possibility of many worlds as an objective feature of the universe: since an objective basis is needed to objectively define a division into worlds, such a division is no longer possible, and splitting is an observer-dependent phenomenon.
We’d still expect strongly interacting systems e.g. the earth (and really, the solar system?) to have an objective splitting. But it seems correct to say that I basically don’t know how far that extends.
Why? If you could prove that large environments must cause decoherence into n>1 branches you would have solved the measurement problem as it is currently understood.
This is just chaos theory, isn’t it? If one person sees that Schrödinger’s cat is dead, then they’re going to change their future behavior, which changes the behavior of everyone they interact with, and this then butterflies up to entangle the entire earth in the same superposition.
You’re saying that if you have decoherent splitting of an observer, that leads to more decoherent splitting. But where does the initial decoherent splitting come from?
The observer is highly sensitive to differences along a specific basis, and therefore changes a lot in response to that basis. Due to chaos, this then leads to everything else on earth getting entangled with the observer in that same basis, implying earth-wide decoherence.
What does “highly sensitive” mean? In classical physics, an observer can produce an energy output much greater than the energy input of the observation, but no splitting is implied. In bare Everettian theory, an observer becomes entangled with the coherent superposition they are observing, and goes into a coherent superposition themselves, so no decoherent splitting is implied. You still haven’t said where the initial decoherent splitting occurs.