So for the cat, a superposition of dead and alive will never be “objective” since it is not stable under interactions with photons – and so cannot be copied many times.
Ask yourself what is being copied many times. The very fact that the quantum cat is in a superposition of only alive and dead tells you something else that is only apparently “consensus objective”: everyone agrees that the only possible definite states associated with the system are that the quantum cat is alive, or dead, and nothing else. This just kicks the definition of “objective” back to the specification of the state vector. There is something objective about the fact that the cat only ever has the possibility of being recorded as alive or dead and nothing else. That list (alive, dead) is revealed by interactions/measurements/recordings, but the fact that there is never anything else on the list, despite all possible ways of interacting with the system, remains. In fact, including the superposition of dead and alive leads to actual physical consequences (see the Elitzur-Vaidman bomb, for example), and Zeilinger won a Nobel Prize in part for the physical instantiation of this “interaction-free” measurement.
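(To pin the notation down, the textbook two-state description is

$$|\psi\rangle = \alpha\,|\mathrm{alive}\rangle + \beta\,|\mathrm{dead}\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

and any recording returns “alive” with probability $|\alpha|^2$ or “dead” with probability $|\beta|^2$; nothing off the list ever appears.)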
This seems to invoke the scientific method, where reproducibility of an experimental result is the core criterion for “objective truth”.
But it isn’t. Firstly, we are working with two different and incompatible senses of reproducibility. Reproducibility of the naive and classical kind, which is when Anne does an experiment and John does the same experiment at a different time and a different place, and they both record the same result, is nothing but evidence that the experiment is a) so big that quantum fluctuations are washed out and b) invariant under space and time translations. Physicists and chemists have long since dispensed with this naive criterion as constituting “objective truth”. The whole quantum mechanical picture is that the experiment is only ever reproducible in the aggregate, a completely different kind of reproducibility: Anne and John will only agree on two things after conducting the experiment infinitely many times, namely 1) the types of possible outcomes of the experiment (i.e. the list of definite states) and 2) the probabilities with which these outcomes occur; and both 1) and 2) take as a given that the experiment is invariant under space and time translations over a large number of iterations. This is why everybody is satisfied when the Large Hadron Collider reports the existence of the Higgs particle—nobody cares that there is only one Large Hadron Collider, because everyone assumes in QFT that the experimental setup is unaffected by spacetime translations anyway. Instead they only bother “reproducing” the results by independent experiments, i.e. in-principle different ways of measuring the same effect, like ATLAS or CMS, which are literally detectors just tacked onto the LHC apparatus. In other words, working physicists (at least) have long moved on from the naive notion of reproducibility and are working with something much more constrained.
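(A toy numerical illustration of reproducibility in the aggregate; the probabilities here are made up for the example:)

```python
import numpy as np

# Toy two-outcome experiment: Anne and John never agree run-by-run, but
# the outcome list and the frequencies converge as trials accumulate.
outcomes = np.array(["alive", "dead"])
p = np.array([0.7, 0.3])  # hypothetical Born-rule probabilities

def empirical_frequencies(rng, n):
    """Repeat the experiment n times; return outcome -> observed frequency."""
    draws = rng.choice(outcomes, size=n, p=p)
    return {o: float(np.mean(draws == o)) for o in outcomes}

anne, john = np.random.default_rng(1), np.random.default_rng(2)
for n in (10, 1_000, 100_000):
    print(n, empirical_frequencies(anne, n), empirical_frequencies(john, n))
# At n=10 the two disagree noticeably; by n=100_000 both report the same
# list of outcomes and (nearly) the same probabilities -- reproducibility
# in the aggregate, not per run.
```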
Thanks for your comments! I’m having a bit of trouble clearly seeing your core points here—so forgive me if I misinterpret, or address something that wasn’t core to your argument.
To the first part, I feel like we need to clearly separate QM itself (Copenhagen), different quantum foundations theories, and Quantum Darwinism specifically. What I was saying is specifically about how Quantum Darwinism views things (in my understanding), and since interpretations of QM are trying to be more fundamental than QM itself (since QM should be derived from them), we can’t use QM arguments here. So QD says that (alive, dead) is the complete list because of consensus (i.e., in this view, there isn’t anything more fundamental than consensus).
I don’t think I agree with (or don’t understand what you mean by) “including the superposition of dead and alive leads to actual physical consequences”—the bomb-testing result is a consequence of standard QM, so it doesn’t prove anything “new.”
To the second part, I implicitly meant that reproducibility could mean either deterministic (reproducibility of a specific outcome) or statistical (reproducibility of a probability of an outcome over many realizations); I don’t really see those two as fundamentally different. In either case, we think of objective truth (whether probabilistic or deterministic) as something derived from reproducibility—so, for example, excluding Knightian uncertainty.
What I was saying is specifically about how Quantum Darwinism views things (in my understanding), and since interpretations of QM are trying to be more fundamental than QM itself (since QM should be derived from them), we can’t use QM arguments here.
With this I just wanted to point out that I was not making any argument that relies on a particular interpretation of QM to work up to interaction-free measurements. I wanted to make it clear that I was not arguing anything about a collapse mechanism/what happens under the hood—it’s just empirically correct that the result of any measurement is a definite state. You don’t need any theory, it’s just brute empirics, all territory. [Tangentially, but still true: there is no distinction even theoretically/“map side” in how QD and Copenhagen QM treat definite states—all the differences come before this, in the postulation of pointer states, collapse mechanism, etc., but QD still completely agrees with the canonical notion of a definite state.]
I don’t think I agree with (or don’t understand what you mean by) “including the superposition of dead and alive leads to actual physical consequences”—the bomb-testing result is a consequence of standard QM, so it doesn’t prove anything “new.”
All I really wanted to do was to point out an example of “interaction-free” measurement, which throws a brick into the Quantum Darwinism approach. There can never be an “objective consensus” about what happens in the bomb cavity, because any sea of photons/electrons/whatever present in the cavity will trip the bomb. The point of mentioning the Zeilinger experiment was to say that this is an empirical result, so QD has to be able to explain it, and it can’t. The only way to get Elitzur-Vaidman from QD is to postulate two different splits of system and environment during the experiment—this is a concrete version of the criticism laid out in a paper by Ruth Kastner. It is a plain physical fact that you can have interaction-free measurement, and QD struggles with explaining this since it has to perform an ontological split mid-experiment; if the ontological split is arbitrary, why do you need to perform a specific, special split to reproduce the results of the experiment? If it isn’t arbitrary, then you have to do some hard work to explain why changing your definition of system and environment for different experiments (and sometimes mid-experiment) is justified.
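(For concreteness, the standard Mach-Zehnder bookkeeping behind the bomb test, which is textbook QM and not interpretation-specific: with 50/50 beam splitters, arms $u$ and $l$, and the usual phase conventions,

$$|\mathrm{in}\rangle \;\to\; \tfrac{1}{\sqrt{2}}\big(|u\rangle + i\,|l\rangle\big) \;\to\; i\,|C\rangle,$$

so for a dud bomb the dark port $D$ never clicks. A live bomb in arm $l$ acts as a path measurement: with probability $\tfrac{1}{2}$ the photon takes $l$ and detonates it; otherwise it reaches the second splitter in $|u\rangle$ alone, giving $P(C) = P(D) = \tfrac{1}{4}$. A click at $D$ therefore certifies a live bomb that no photon ever touched.)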
I implicitly meant that reproducibility could mean either deterministic (reproducibility of a specific outcome) or statistical (reproducibility of a probability of an outcome over many realizations); I don’t really see those two as fundamentally different.
It’s hard for me to see why you think they are not fundamentally different definitions of reproducibility. On an iteration-by-iteration basis, they clearly differ significantly: in the first case (reproducibility of a specific outcome), the ball must fall in the same way every time for it to count as evidence towards this kind of reproducibility, and a single instance of it not falling in the same way destroys the claim. The second case (reproducibility of a probability of an outcome over many realizations) immediately destroys the first kind of reproducibility, since individual iterations are allowed, even expected, to differ. Is it not the difference between having intrinsic probability in your definition of reproducibility and not having it? Maybe I am missing something blunt here.
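(Spelled out as a minimal formalization, in my wording: with $x_i$ the outcome of run $i$,

$$\text{deterministic:}\quad x_i = x^{*}\ \text{ for every } i, \qquad \text{statistical:}\quad \lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}\mathbf{1}[x_i = x^{*}] = p.$$

The first is falsified by a single deviant run; the second constrains only frequencies and is compatible with every individual run differing.)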
There can never be an “objective consensus” about what happens in the bomb cavity,
Ah, nice catch—I see your point now, quite interesting. Now I’m curious whether this bomb-testing setup makes trouble for other quantum foundations frameworks too...? As for QD, I think we could make it work—here is a first attempt, let me know what you think (honestly, I’m just using decoherence here, nothing else):
If the bomb is “live”, then the two paths will quickly entangle many degrees of freedom of the environment, and so you can’t get reproducible records that involve interference between the two branches. If the bomb is a “dud”, then the two paths remain contained to the system, and can interfere before making copies of the measurement outcomes.
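(A minimal numerical sketch of this story, with the environment collapsed to a single qubit that records which path the photon took; the explosion/absorption itself is ignored, only the which-path record is modeled:)

```python
import numpy as np

# Photon path is a qubit (|u>, |l>); bomb-plus-environment is one qubit
# that flips iff the photon takes the lower arm.
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # 50/50 beam splitter
u, l = np.eye(2, dtype=complex)                 # path basis states
env0, env1 = np.eye(2, dtype=complex)           # environment basis states

def port_probs(live_bomb: bool):
    amp_u, amp_l = BS @ u                       # after the first splitter
    if live_bomb:
        # Live bomb: lower arm flips the environment qubit, so a
        # which-path record exists and the branches can't interfere.
        state = amp_u * np.kron(u, env0) + amp_l * np.kron(l, env1)
    else:
        # Dud: environment untouched, the two arms stay coherent.
        state = amp_u * np.kron(u, env0) + amp_l * np.kron(l, env0)
    state = np.kron(BS, np.eye(2)) @ state      # second beam splitter
    # Trace out the environment: probability of each output port.
    return np.sum(np.abs(state.reshape(2, 2)) ** 2, axis=1)

print("dud :", port_probs(False))  # [0, 1]: one port dark, full interference
print("live:", port_probs(True))   # [0.5, 0.5]: interference washed out
```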
Honestly, I have a bit of trouble arguing about quantum foundations approaches, since they all boil down to the same empirical predictions (sort of by definition) and most are inherently not falsifiable—so ultimately it feels like a personal preference for which argumentation you find convincing.
Is it not the difference between having intrinsic probability in your definition of reproducibility and not having it?
I just meant that the good old scientific method is what we used to prove classical mechanics, statistical mechanics, and QM. In either case, it’s a matter of anyone repeating the experiment getting the same outcome—whether that outcome is “ball rolls down” or “ball rolls down 20% of the time”. I’m trying to see if we can say something in cases where no outcome is quite reproducible—probabilistic outcome or otherwise. Knightian uncertainty is one way this could happen. Another is cases where we may be able to say something more than “I don’t know, so it’s 50-50”, but where that’s the only truly reproducible statement.