I coincidentally read that paper today (confession: I am not a physicist yet, still a student), and I am really suspicious of his use of unitary transformations. A transformation is unitary if and only if it preserves the l^2-norm, which is precisely what the Born rule singles out (i.e. that the l^2-norm is the correct norm on wavefunctions). I asked myself which step would break down if the actual probability were the amplitude to the fourth power rather than the square, and I haven't found it yet (provided we also redefine 'unitary' accordingly). But (hopefully) I'm just misunderstanding the problem?
They address this in footnote 4: they’re just deriving that the amplitudes squared should be interpreted as probabilities using quantum mechanics as defined, which includes unitary evolution and all that.
You could try the same thing with a QM variant with different mathematical structure, although you might be interested to know that linear transformations that preserve the l^p norm for p other than 2 are boring (generalized permutation matrices). So you wouldn't be able to evolve your orthogonal environmental states into the right combinations of identical environments + coin flips. There are also other reasons why p = 2 is special. Scott Aaronson has written about this (and also linearity and the use of complex numbers) in the context of whether quantum mechanics is an island in theoryspace.
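To make the contrast concrete, here's a quick numerical sketch (my own illustration, not from the paper): a rotation preserves the l^2 norm but not the l^4 norm, while a generalized permutation matrix (permutation composed with sign/phase flips) preserves the l^p norm for every p. The specific matrices and the test vector are just arbitrary examples.

```python
import math

def p_norm(v, p):
    # l^p norm of a real vector
    return sum(abs(x) ** p for x in v) ** (1 / p)

def apply(M, v):
    # matrix-vector product, plain lists
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# A 45-degree rotation: unitary, so it preserves the l^2 norm...
c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
rotation = [[c, -s], [s, c]]

# ...versus a generalized permutation matrix (swap + sign flip),
# which preserves the l^p norm for every p.
gen_perm = [[0, -1], [1, 0]]

v = [0.6, 0.8]  # l^2-normalized state

for name, M in [("rotation", rotation), ("gen. perm.", gen_perm)]:
    w = apply(M, v)
    for p in (2, 4):
        print(name, "p =", p, "norm =", round(p_norm(w, p), 6))
```

Running this shows the rotation keeps the l^2 norm at 1 but changes the l^4 norm (from about 0.857 to about 0.990), whereas the generalized permutation leaves both untouched. So in an "l^4 quantum mechanics," the only allowed evolutions would be these trivial relabelings, and the decoherence argument couldn't get off the ground.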
Going a bit deeper: it seems like all of the work is done by factoring out the environment. That is, they identify unitary transformations of the environment as producing epistemically equivalent states; but why shouldn't non-unitary transformations also be epistemically equivalent, whether or not unitary evolution is what actually happens in quantum mechanics? They have to leave the environment states orthogonal, since that's assumed by decoherence, but why not (say) just multiply one of those environment states by an arbitrary number and derive any probability you want? In other words: why shouldn't the observer be indifferent to the relative measure of environment branches, given that the environment is supposed to be independent, and why not then absorb any coefficients you like into the environment part?
The answer is that you can’t think of non-unitary transformations as acting independently on one part of a system, and that this is also part of the way quantum mechanics is specified. Given the mathematics of quantum mechanics, it only makes sense to talk about two parts of a wavefunction as independent under unitary transformations of the individual parts. See Appendix B of their companion paper, and think about what happens if you replace U_B with something non-unitary in equation B.4.
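Here is a small sketch of that point (my own toy example; the state and matrices are arbitrary, and this stands in for, rather than reproduces, their equation B.4). A bipartite amplitude array psi[a][b] represents a decohered system-environment state; acting on the environment factor with a unitary leaves the global norm (and the relative branch weights) alone, while rescaling one environment branch with a non-unitary map changes both, i.e. it produces a genuinely different physical state, not an epistemically equivalent relabeling.

```python
import math

def apply_to_env(psi, M):
    # psi[a][b]: amplitude of |a>_system (x) |b>_env; act with M on the env factor
    return [[sum(M[b][k] * psi[a][k] for k in range(2)) for b in range(2)]
            for a in range(2)]

def global_norm(psi):
    # l^2 norm of the joint state
    return math.sqrt(sum(abs(x) ** 2 for row in psi for x in row))

# Decohered state: sqrt(1/3)|0>|E0> + sqrt(2/3)|1>|E1>  (branch weights 1/3, 2/3)
a, b = math.sqrt(1 / 3), math.sqrt(2 / 3)
psi = [[a, 0], [0, b]]

# Unitary on the environment alone: the global norm stays 1
c, s = math.cos(0.7), math.sin(0.7)
U = [[c, -s], [s, c]]
print(global_norm(apply_to_env(psi, U)))

# Non-unitary rescaling of one environment branch: the global norm changes
# (here to sqrt(2)), and the relative branch weights flip from 1:2 to 2:1
M = [[2, 0], [0, 1]]
print(global_norm(apply_to_env(psi, M)))
```

So "absorbing a coefficient into the environment" via a non-unitary map isn't a free relabeling the observer can be indifferent to: within the mathematics of quantum mechanics it visibly alters the joint state, which is exactly why the derivation only grants indifference under unitaries on the individual parts.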