What I was saying is specifically about how Quantum Darwinism views things (in my understanding) - and since interpretations of QM are trying to be more fundamental than QM itself (since QM should be derived from them), we can’t use QM arguments here.
With this I just wanted to point out that, up until interaction-free measurements came in, I was not making any argument that relies on a particular interpretation of QM to work. I wanted to make it clear that I was not arguing anything about a collapse mechanism or what happens under the hood; it is just empirically correct that the result of any measurement is a definite state. You don’t need any theory for that, it’s just brute empirics, all territory. [Tangentially, but still true: there is no distinction even theoretically/“map side” in how QD and Copenhagen QM treat definite states. All the differences come before this, in the postulation of pointer states, collapse mechanism, etc., but QD still completely agrees with the canonical notion of a definite state.]
I don’t think I agree with (or don’t understand what you mean by) “including the superposition of dead and alive leads to actual physical consequences”: the bomb-testing result is a consequence of standard QM, so it doesn’t prove anything “new.”
All I really wanted to do was to point out an example of “interaction-free” measurement, which throws a brick into the Quantum Darwinism approach. There can never be an “objective consensus” about what happens in the bomb cavity, because any sea of photons/electrons/whatever present in the cavity would trip the bomb. The point of mentioning the Zeilinger experiment was to say that this is an empirical result, so QD has to be able to explain it, and it can’t. The only way to get Elitzur-Vaidman from QD is to postulate two different splits of system and environment during the experiment; this is a concrete version of the criticism laid out in a paper by Ruth Kastner. It is a plain physical fact that you can have interaction-free measurement, and QD struggles to explain this since it has to perform an ontological split mid-experiment. If the ontological split is arbitrary, why do you need to perform one specific, special split to reproduce the results of the experiment? If it isn’t arbitrary, then you have to do some hard work to explain why changing your definition of system and environment for different experiments (and sometimes mid-experiment) is justified.
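(To make the “interaction-free” part concrete, here is a quick toy calculation of the standard Elitzur-Vaidman numbers in a Mach-Zehnder interferometer. The conventions and code are just my own sketch, not taken from anyone’s paper.)

```python
import numpy as np

# Modes: 0 = upper arm, 1 = lower arm; a live bomb sits in the lower arm.
BS = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # 50/50 beamsplitter (Hadamard convention)
psi_in = np.array([1.0, 0.0])                  # photon enters through the upper port

# Dud bomb: nothing records the path, so the two arms stay coherent and interfere.
psi_dud = BS @ (BS @ psi_in)
print("dud  -> P(bright), P(dark):", np.round(np.abs(psi_dud) ** 2, 3))  # [1. 0.]

# Live bomb: it acts as a which-path measurement in the lower arm.
psi_mid = BS @ psi_in
p_explode = abs(psi_mid[1]) ** 2               # photon took the bomb arm
p_upper = abs(psi_mid[0]) ** 2                 # photon found in the upper arm instead
psi_out = BS @ np.array([1.0, 0.0])            # collapsed state continues to the second beamsplitter
p_bright = p_upper * abs(psi_out[0]) ** 2
p_dark = p_upper * abs(psi_out[1]) ** 2
print("live -> P(explode), P(bright), P(dark):",
      round(p_explode, 3), round(p_bright, 3), round(p_dark, 3))  # 0.5 0.25 0.25
```

A dark-port click (probability 1/4) certifies that the bomb is live even though the photon never took the bomb’s arm; that is the empirical fact I mean by “interaction-free” measurement.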
I implicitly meant that reproducibility could mean either deterministic (reproducibility of a specific outcome) or statistical (reproducibility of a probability of an outcome over many realizations) - I don’t really see those two as fundamentally different.
It’s hard for me to see why you think they are not fundamentally different definitions of reproducibility. On an iteration-by-iteration basis, they clearly differ significantly: in the first case (reproducibility of a specific outcome), the ball must fall in the same way every time for it to count as evidence towards this kind of reproducibility, and a single instance of it not falling in the same way falsifies it. The second case (reproducibility of a probability of an outcome over many realizations) immediately destroys the first kind of reproducibility, since it allows the individual outcomes to vary from run to run. Is it not the difference between having intrinsic probability in your definition of reproducibility and not having it? Maybe I am missing something obvious here.
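(Just to spell out how I am reading the two definitions, here is a trivial contrast; the function names and the tolerance are my own illustration, nothing standard.)

```python
import random

def deterministically_reproducible(outcomes):
    # Falsified by a single deviating run: every outcome must be identical.
    return len(set(outcomes)) == 1

def statistically_reproducible(outcomes, target_freq, tol=0.05):
    # A claim only about the long-run frequency of one outcome,
    # e.g. "heads 50% of the time"; no single run can falsify it.
    freq = sum(1 for o in outcomes if o == "heads") / len(outcomes)
    return abs(freq - target_freq) < tol

runs = [random.choice(["heads", "tails"]) for _ in range(100_000)]
print(deterministically_reproducible(runs))   # False: individual runs differ
print(statistically_reproducible(runs, 0.5))  # True: only the frequency reproduces
```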
There can never be an “objective consensus” about what happens in the bomb cavity,
Ah, nice catch—I see your point now, quite interesting. Now I’m curious whether this bomb-testing setup makes trouble for other quantum foundation frameworks too...? As for QD, I think we could make it work—here is a first attempt, let me know what you think (honestly, I’m just using decoherence here, nothing else):
If the bomb is “live”, then the two paths will quickly entangle many degrees of freedom of the environment, and so you can’t get reproducible records that involve interference between the two branches. If the bomb is a “dud”, then the two paths remain confined to the system, and can interfere before copies of the measurement outcome are made.
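(Here is a minimal decoherence-only sketch of what I mean, with the live bomb modeled as a single environment qubit that records the path. The modeling choice is mine; a real live bomb also absorbs the photon half the time, which is what brings the surviving dark-port rate down to the usual 1/4. The point is just that a which-path record kills the interference that would otherwise force the dark-port probability to zero.)

```python
import numpy as np

BS = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # 50/50 beamsplitter

def dark_port_prob(records_path: bool) -> float:
    psi = BS @ np.array([1.0, 0.0])            # photon after the first beamsplitter
    if records_path:
        # Joint photon-environment state (1/sqrt(2))(|upper>|e0> + |lower>|e1>).
        joint = np.zeros((2, 2), dtype=complex)
        joint[0, 0] = psi[0]
        joint[1, 1] = psi[1]
        rho = np.einsum('ae,be->ab', joint, joint.conj())  # trace out the environment
    else:
        rho = np.outer(psi, psi.conj())        # no record: the photon stays pure
    print("off-diagonal coherence:", round(abs(rho[0, 1]), 3))
    rho_out = BS @ rho @ BS.conj().T           # second beamsplitter
    return float(np.real(rho_out[1, 1]))       # probability at the dark port

print("dud  (coherent):  P(dark) =", round(dark_port_prob(False), 3))  # coherence 0.5, P(dark) 0.0
print("live (decohered): P(dark) =", round(dark_port_prob(True), 3))   # coherence 0.0, P(dark) 0.5
```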
Honestly, I have a bit of trouble arguing about quantum foundations approaches, since they all boil down to the same empirical predictions (sort of by definition) and most are inherently unfalsifiable; so it ultimately feels like a personal preference about which style of argumentation you find convincing.
Is it not the difference between having intrinsic probability in your definition of reproducibility and not having it?
I just meant that the good old scientific method is what we used to establish classical mechanics, statistical mechanics, and QM. In each case, it’s a matter of anyone repeating the experiment getting the same outcome, whether this outcome is “ball rolls down” or “ball rolls down 20% of the time”. I’m trying to see if we can say something in cases where no outcome, probabilistic or otherwise, is quite reproducible. Knightian uncertainty is one way this could happen. Another is cases where we may be able to say something more than “I don’t know, so it’s 50-50”, but where that is the only truly reproducible statement.
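(For that last case, one toy way to write it down is that the only reproducible statement is an interval of probabilities rather than a single number; this interval picture is one common way to formalize Knightian uncertainty, and the specific numbers here are purely illustrative.)

```python
# Hypothetical bounds on P(ball rolls down): we can say more than "50-50",
# but no single frequency is claimed to reproduce.
p_low, p_high = 0.2, 0.6

def consistent_with_bounds(observed_freq: float) -> bool:
    # The bound itself is the only statement that repetition can check.
    return p_low <= observed_freq <= p_high

print(consistent_with_bounds(0.35))  # True
print(consistent_with_bounds(0.80))  # False: this would falsify even the interval claim
```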