I mean I could accept that the Schrödinger equation gives the evolution of the wave-function, but why care about its eigenfunctions so much?
I’m not sure if this will be satisfying to you but I like to think about it like this:
Experiments show that the order of quantum measurements matters. The mathematical representation of the physical quantities needs to take this into account. One simple kind of non-commutative object is the matrix.
If physical quantities are represented by matrices, the possible measurement outcomes need to be encoded in there somehow. They also need to be real. Both conditions are satisfied by the eigenvalues of self-adjoint matrices.
Experiments show that if we immediately repeat a measurement, we get the same outcome again. So if eigenvalues represent measurement outcomes, the state of the system after the measurement must be related to them somehow. Taking the eigenvectors of the matrix as the post-measurement states is a simple realization of this.
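For concreteness, here is a small numpy sketch of those three points (my own toy illustration, using two Pauli matrices as the observables):

```python
import numpy as np

# Two physical quantities represented by self-adjoint (Hermitian) matrices,
# here the Pauli matrices sigma_x and sigma_z.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# 1) Order matters: the two matrices do not commute.
print(np.allclose(sx @ sz, sz @ sx))               # False

# 2) The possible outcomes are the eigenvalues, and they are real.
eigvals, eigvecs = np.linalg.eigh(sz)
print(eigvals)                                     # [-1.  1.]

# 3) Repeating a measurement reproduces the outcome: after measuring sz and
#    getting +1, the state is the corresponding eigenvector, and measuring
#    sz again gives +1 with probability 1.
psi = np.array([0.6, 0.8], dtype=complex)          # some initial state, norm 1
plus = eigvecs[:, 1]                               # eigenvector for eigenvalue +1
post = plus * (plus.conj() @ psi)                  # project onto that eigenvector
post = post / np.linalg.norm(post)                 # renormalise
print(abs(plus.conj() @ post) ** 2)                # 1.0 (up to floating point)
```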
This isn’t a derivation but it makes the mathematical structure of QM somewhat plausible to me.
Right, but (before reading your post) I had assumed that the eigenvectors somehow “popped out” of the Everett interpretation. But it seems like they are built in from the start. Which is fine, it’s just deeply weird. So it’s kind of hard to say whether the Everett interpretation is more elegant. I mean in the Copenhagen interpretation, you say “measuring can only yield eigenvectors”, and in the Everett interpretation, you say “measuring can only yield eigenvectors, and all measurements are done so that the whole thing is still unitary”. But in the end even the Everett interpretation distinguishes “observers” somehow. I mean in the setup you describe there isn’t any reason why we can’t call the “state space” the observer space and the observer “the system being studied” and then write down the same system from the other point of view...
The “symmetric matrices <-> real eigenvalues” connection is of course important; this is essentially just the spectral theorem, which tells us that real linear combinations of orthogonal projections are symmetric matrices (and vice versa).
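Spelled out, that statement is the spectral decomposition of a self-adjoint matrix:

$$A = A^\dagger \;\Longleftrightarrow\; A = \sum_i \lambda_i P_i, \qquad \lambda_i \in \mathbb{R}, \quad P_i = P_i^\dagger = P_i^2, \quad P_i P_j = 0 \ (i \neq j),$$

with the real eigenvalues $\lambda_i$ as the possible outcomes and the $P_i$ the orthogonal projections onto the corresponding eigenspaces.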
Nowadays matrices are seen as “simple non-commutative objects”. I’m not sure if this was true when QM was being developed. But then again, I’m not sure how linear QM “really” is. I mean all of this takes place on vectors with norm 1 (and the results are invariant under change of phase), and once we quotient out the norm, most of the linear structure is gone. I’m not sure what the correct way to think about the phase is. On one hand, it seems like a kind of “fake” unobservable variable and it should be permissible to quotient it out somehow. On the other hand, the complexness of the Schrödinger equation seems really important. But is this complexness a red herring? What goes wrong if we just take our “base states” as discrete objects and try to model QM as the evolution of probability distributions over ordered pairs of these states?
Right, but (before reading your post) I had assumed that the eigenvectors somehow “popped out” of the Everett interpretation.
This is a bit of a tangent but decoherence isn’t exclusive to the Everett interpretation. Decoherence is itself a measurable physical process independent of the interpretation one favors. So explanations which rely on decoherence are part of all interpretations.
I mean in the setup you describe there isn’t any reason why we can’t call the “state space” the observer space and the observer “the system being studied” and then write down the same system from the other point of view...
In the derivations of decoherence you make certain approximations which, loosely speaking, depend on the environment being big relative to the quantum system. If you swap the roles, these approximations aren’t valid any more. I’m not sure if we are on the same page regarding decoherence, though (see my other reply to your post).
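A toy version of what I mean (my own sketch, not a real derivation): couple a single qubit in a superposition to N environment qubits and watch the off-diagonal element of its reduced density matrix shrink as N grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def coherence_after_scattering(n_env):
    """Off-diagonal element of the system's reduced density matrix after each
    of n_env environment qubits has 'scattered' off the system.

    The system starts in (|0> + |1>)/sqrt(2); environment qubit k stays in |0>
    if the system is |0> and ends up in cos(t_k)|0> + sin(t_k)|1> if the system
    is |1>.  The remaining coherence is the product of the overlaps <e_k^0|e_k^1>.
    """
    thetas = rng.uniform(0.0, 0.5, size=n_env)   # weak, random couplings
    overlaps = np.cos(thetas)                    # <e_k^0|e_k^1> for each qubit
    return 0.5 * np.prod(overlaps)               # the initial coherence is 1/2

for n in [1, 10, 100, 1000]:
    print(n, coherence_after_scattering(n))
# The off-diagonal term decays roughly exponentially in the number of
# environment qubits.  Undoing this would require acting on all of them,
# which is why the big-environment approximation only works one way round.
```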
What goes wrong if we just take our “base states” as discrete objects and try to model QM as the evolution of probability distributions over ordered pairs of these states?
You might be interested in Lucien Hardy’s attempt to find a more intuitive set of axioms for QM compared to the abstractness of the usual presentation: https://arxiv.org/abs/quant-ph/0101012
Isn’t the whole point of the Everett interpretation that there is no decoherence? We have a Hilbert space for the system, and a Hilbert space for the observer, and a unitary evolution on the tensor product of the two. With these postulates (and a few more), we can start with a pure state and end up with some entangled state in the product space, which we then interpret as being “multiple observers”, right? I mean this is how I read your paper.
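Concretely, this is the picture I have in mind; a minimal numpy sketch, where I’m assuming the “measurement” is just a CNOT-like unitary on the product space:

```python
import numpy as np

# System qubit in a superposition, observer qubit in a "ready" state.
system = np.array([0.6, 0.8], dtype=complex)
observer = np.array([1.0, 0.0], dtype=complex)
psi = np.kron(system, observer)          # pure state in the product space

# Unitary "measurement": a CNOT that copies the system's basis state into the
# observer (|s>|0> -> |s>|s>).  Everything stays unitary throughout.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
psi = cnot @ psi                         # still a pure state, now entangled

# Reduced density matrix of the observer: trace out the system.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)   # indices (s, o, s', o')
rho_obs = np.trace(rho, axis1=0, axis2=2)
print(np.round(rho_obs, 3))   # diag(0.36, 0.64): mixed, no off-diagonal terms
```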
We are surely not on the same page regarding decoherence, as I know almost nothing about it :)
The arxiv-link looks interesting, I should have a look at it.
Yes, the coherence-based approach (Everett’s original paper, early MWI) is quite different to the decoherence-based approach (Dieter Zeh, post 1970).
Deutsch uses the coherence-based approach, while most other many-worlders use the decoherence-based approach.
He absolutely does establish that quantum computing is superior to classical computing, that underlying reality is not classical, and that the superiority of quantum computing requires some extra structure to reality. What the coherence-based approach does not establish is whether the extra structure adds up to something that could be called “alternate worlds” or parallel universes, in the sense familiar from science fiction.
In the coherence-based approach, “worlds” are coherent superpositions. That means they exist at small scales, they can continue to interact with each other after “splitting”, and they can be erased. These coherently superposed states are the kind of “world” we have direct evidence for, although they seem to lack many of the properties required for a fully fledged many-worlds theory, hence the scare quotes.
In particular, if you just model the wave function, the results you get will represent every possible outcome. In order to match observation, you will have to keep discarding unobserved outcomes and renormalising, as you do in every interpretation. It’s just that this extra step is performed manually, not by the program.
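As a sketch of that manual step (my own illustration, not anyone’s actual simulation code): once the simulated wave function has amplitudes on several outcomes, you zero out everything except the branch you actually observed and renormalise.

```python
import numpy as np

# Simulated wave function over the basis states of a small register: it carries
# amplitudes for every possible outcome of a measurement in that basis.
psi = np.array([0.6, 0.0, 0.0, 0.8], dtype=complex)

def keep_observed_branch(psi, outcome):
    """Manually discard the unobserved outcomes and renormalise.

    `outcome` indexes the basis state that was actually observed; every other
    amplitude is zeroed out and the survivor is rescaled back to norm 1.
    """
    projected = np.zeros_like(psi)
    projected[outcome] = psi[outcome]
    return projected / np.linalg.norm(projected)

# Suppose the observed outcome was basis state 3 (probability |0.8|**2 = 0.64):
print(keep_observed_branch(psi, outcome=3))   # [0, 0, 0, 1]: the surviving branch
```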