Well, I liked the paper, but I’m not knowledgeable enough to judge its true merits. It deals heavily with Bayesian questions, somewhat in Jaynes’s style, so I thought it could be relevant to this forum.
At least one of the authors is a well-known theoretical physicist with an awe-inspiring Hirsch index, so presumably the paper is not trivially worthless. I think it merits a more careful read.
Someone can build a career on successfully and ingeniously applying QM, and still hold personal views about why QM works that are wrong or naive.
Rather than just be annoyed with the paper, I want to identify its governing ideas. Basically, this is a research program which aims to show that quantum mechanics doesn’t imply anything strikingly new or strange about reality. The core claim is that quantum mechanics is the natural formalism for describing any phenomenon which exhibits uncertainty but which is still robustly reproducible.
In slightly more detail: First, there is no attempt to figure out hidden physical realities. The claim is that in any possible world where certain experimental results occur, QM will provide an apt and optimal description of events, regardless of what the real causes are. Second, there is a determination to show that QM is somehow straightforward or even banal: ‘quantum theory is a “common sense” description of the vast class of experiments that belongs to category 3a.’ Third, the authors are inspired by Jaynes’s attempt to obtain QM from Bayes, and Frieden’s attempt to get physics from Fisher information, which they think they can justify for experiments that are “robustly” reproducible.
Having set out this agenda, what evidence do the authors provide? First, they describe something vaguely like an EPR experiment, make various assumptions about how the outputs behave, and then show that these assumptions imply correlations like those produced when a particular entangled state is used as input in a real EPR experiment. They also add that with different starting assumptions, they can obtain outputs like those of a different entangled state.
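For reference, the standard QM prediction that their EPR assumptions are said to reproduce is the singlet-state correlation E(a, b) = −cos θ, where θ is the angle between the two detector settings. Here is a minimal sketch of that textbook result (this is the illustrative target, not the paper’s own derivation):

```python
import numpy as np

def singlet_correlation(theta):
    """Expectation of the product of the two +/-1 spin outcomes
    for the singlet state, with detectors separated by angle theta.
    Standard QM prediction: E = -cos(theta)."""
    return -np.cos(theta)

def joint_probs(theta):
    """Joint outcome probabilities implied by the correlation:
    P(same) = (1 + E)/2, P(opposite) = (1 - E)/2."""
    E = singlet_correlation(theta)
    return {"same": (1 + E) / 2, "opposite": (1 - E) / 2}
```

At θ = 0 the outcomes are perfectly anticorrelated (P(opposite) = 1), and at θ = π/2 they are uncorrelated; any proposed derivation of the EPR case has to land on exactly this curve.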
Then they give a similarly abstracted description of a Stern-Gerlach experiment, and here they claim to get the Born rule as a consequence of their assumptions. Finally, they consider a moving particle under repeated observation, and say they can derive the Schrödinger equation by assuming that the outcomes resemble Newtonian mechanics on average.
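Again for reference, the Born-rule result they claim to recover in the Stern-Gerlach case is the textbook one: a spin-1/2 particle prepared “up” along z, measured along an axis tilted by angle θ, gives P(up) = cos²(θ/2). A minimal sketch computing this from the spinor amplitudes directly (standard QM, not the paper’s derivation):

```python
import numpy as np

def born_probability_up(theta):
    """P(spin up along an axis tilted by theta from z, in the x-z plane),
    for a particle prepared up along z. Born rule: |<up_n|psi>|^2."""
    psi = np.array([1.0, 0.0])  # spinor for "up along z"
    # Eigenvector of n.sigma with eigenvalue +1, n = (sin t, 0, cos t)
    up_n = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    amp = np.vdot(up_n, psi)
    return abs(amp) ** 2
```

Any derivation from “reproducibility plus Bayes-like reasoning” has to single out this cos²(θ/2) law over all other monotone interpolations between P = 1 at θ = 0 and P = 0 at θ = π, which is where I suspect the extra assumptions enter.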
Their choice of case studies, and the assumptions they allow themselves to use, both seem rather haphazard to me. They make many appeals to symmetry, e.g. one of the assumptions in their EPR case study is that the experiment will behave the same regardless of orientation. Or in deriving the Schrödinger equation, they assume translational invariance. These are standard hypotheses in the ordinary approach to physics too, so it’s not surprising that they should yield something like ordinary physics here as well… On the other hand, they derive the Born rule only in the special case of Stern-Gerlach, so they have probably done something tricky there.
In general, it seems that they decided in advance that QM would be derived from the assumption of uncertain but reproducible phenomena plus Bayes-like reasoning, and nothing else… but then, for each of their case studies, they allowed themselves whatever extra assumptions were needed to arrive at the desired conclusion.
So I do not regard the paper’s philosophy as having merit. But the real demonstration of this would require engaging with each of their case studies in turn, and showing that special extra assumptions were indeed used. It would also be useful to criticize their definition of ‘category 3a’ experiments, by showing that there are experiments in that category which manifestly do not exhibit quantum-like behavior… I suspect that the properly corrected version of their paper would be something like “Quantum theory as the most robust description of reproducible experiments that behave like quantum theory”.