Is there any place we can see actual data on one of the photon polarization experiments? Not statistics, but actual data? And a probabilistic analysis, a la Jaynes, of the data?
In theory, I’m fine with non-local interactions, but I’m not yet convinced they are necessary. I don’t see anything about detector efficiency here, which I believe is key to what is actually going on. It seems completely natural to me that if a photon had some directional local variable, it would affect the likelihood of detection in a polarizer, depending on the direction of the polarizer.
Jaynes has reservations about Bell’s Theorem, and they made a fair amount of sense to me. And in general I find it good policy to trust him on how to properly interpret probabilistic reasoning.
Jaynes paper on EPR and Bell’s Theorem: http://bayes.wustl.edu/etj/articles/cmystery.pdf
Jaynes speculations on quantum theory: http://bayes.wustl.edu/etj/articles/scattering.by.free.pdf
If you’re going to use an authority heuristic, at some point you also have to apply the heuristic “what does pretty much everyone else think?”
My impression is that most people take for granted that Bell was correct, and consider it a done deal. Another impression is that “pretty much everyone else” mistakenly takes ontological randomness as a conceptual given on a macro level, and there has yet to be conclusive evidence (see detector efficiency) that ontological randomness operates on a micro level.
I’m not saying he is right. I’m saying that I haven’t seen any better probabilistic analysis of the issue than what I’ve seen from Jaynes, and the evidence so far doesn’t conclusively prove him wrong.
Well, maybe my complaint about authority is just hindsight talking. After all, it’s not like entanglement never showed up in scientific research again: quantum computers are made of the stuff. Electrons are just not classical objects.
And I think that, if we treat the universe as based on causality (a la Judea Pearl), the hidden-variable route ( P(A | B a b λ) = P(A | a b λ) ) really is the only relativistic one, if we avoid many-worlds. There are three ways for events to be linked: directly causally linked (faster than light), both descendants of a node we know about (hidden variable), or both ancestors of a node we know about (faster than light).
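For concreteness, here is a minimal toy sketch of that middle case (a hypothetical Python example of my own, not anything from Jaynes or Pearl): two outcomes sharing a common cause λ are correlated on their own, but become independent once you condition on λ, which is the screening-off condition written above.

```python
# Toy common-cause model (illustrative only): A and B each depend on a
# shared hidden variable lam plus independent local noise.  Marginally
# they are correlated; conditioned on lam, they are independent.
import random

def sample():
    lam = random.choice([0, 1])           # shared hidden cause
    A = lam ^ (random.random() < 0.1)     # lam plus 10% local noise
    B = lam ^ (random.random() < 0.1)
    return lam, A, B

trials = [sample() for _ in range(200_000)]

# Marginally, learning B tells you something about A.
p_A = sum(A for _, A, _ in trials) / len(trials)
b1 = [(A, B) for _, A, B in trials if B == 1]
p_A_given_B = sum(A for A, _ in b1) / len(b1)

# Conditioned on lam, learning B tells you nothing more about A.
lam1 = [(A, B) for lam, A, B in trials if lam == 1]
p_A_given_lam = sum(A for A, _ in lam1) / len(lam1)
lam1_b1 = [(A, B) for A, B in lam1 if B == 1]
p_A_given_lam_B = sum(A for A, _ in lam1_b1) / len(lam1_b1)

print(p_A, p_A_given_B)                # ~0.50 vs ~0.82: B is informative
print(p_A_given_lam, p_A_given_lam_B)  # ~0.90 vs ~0.90: lam screens B off
```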
Jaynes is misunderstanding the class of hidden-variable theories Bell’s theorem rules out: the point is that the hidden variables λ would determine the outcome of measurements, i.e. P(A|aλ) is 0 for certain values of λ and 1 for all other values, and likewise for P(B|bλ), in which case P(A|abλ) must equal P(A|aλ), P(B|Aabλ) must equal P(B|bλ), and eq. 14 does equal eq. 15. (I had noticed this mistake several years ago, but I didn’t know whom to tell about it.)
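To spell that step out (my paraphrase, under the deterministic reading): if λ fixes the outcomes, then A is a function of (a, λ) alone and B is a function of (b, λ) alone. Once the relevant setting and λ are given, learning anything about the other wing adds nothing, so P(A|abλ) = P(A|aλ) and P(B|Aabλ) = P(B|bλ). The general chain-rule factorization P(AB|abλ) = P(A|abλ) P(B|Aabλ) then collapses to P(A|aλ) P(B|bλ), i.e. Jaynes’s (15) reduces to Bell’s (14) for exactly the class of theories the theorem addresses.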
Good catch! Jaynes does not seem to restrict the local hidden-variable models to just the deterministic ones, but allows probabilistic ones as well. This seems to defeat the purpose of introducing hidden variables to begin with. Or maybe I misunderstand what he means.
My recollection is that Jaynes deals with this point. He discusses in particular time-varying lambda (or I’d say maybe space-time-varying lambda). As a general proposition, I don’t know how you could ever rule out a hidden-variable theory with time variation faster than your current ability to measure.
He has another paper (the second link above), where he speculates about the future of quantum theory, talks about phase versus carrier frequencies, and suggests that phase may be real and could determine the outcome of events.
The obvious way to get “random” detection probabilities deterministically would be a time-varying dependence on the interaction between the photon’s polarization, the phase of the wavefront, and the detector direction.
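Here is a minimal toy sketch of that kind of mechanism (a hypothetical example of my own, not the specific proposal in Jaynes’s paper): each photon carries a polarization angle and a hidden, rapidly varying phase, and detection is a purely deterministic threshold condition on both. Averaged over the unobserved phase, the detection rate comes out as the usual cos² (Malus’s law) statistics, even though no individual event involves any randomness.

```python
# Toy deterministic-detection model (illustrative only): whether a photon
# passes the polarizer is a deterministic function of its polarization
# angle, a hidden phase variable, and the polarizer orientation.  The
# apparent randomness comes entirely from the unobserved phase.
import math
import random

def detected(photon_angle, phase, polarizer_angle):
    # Deterministic rule: pass iff the squared projection onto the
    # polarizer axis exceeds the threshold set by the hidden phase.
    return math.cos(photon_angle - polarizer_angle) ** 2 > phase

def detection_rate(photon_angle, polarizer_angle, n=200_000):
    hits = 0
    for _ in range(n):
        phase = random.random()   # hidden variable, uniform on [0, 1)
        hits += detected(photon_angle, phase, polarizer_angle)
    return hits / n

for delta_deg in (0, 30, 45, 60, 90):
    delta = math.radians(delta_deg)
    # Empirical rate vs Malus's law prediction cos^2(delta).
    print(delta_deg, round(detection_rate(0.0, delta), 3),
          round(math.cos(delta) ** 2, 3))
```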
If you’d like to discuss this in more detail, I’d keep this thread alive for a while, as it’s an issue I’d like to clear up for myself.
(I’ll look up the paper when I have more time. EDIT—paper put in first post.)
Very cool paper. But I couldn’t understand the most important point. Can anyone help? When Jaynes says that (15) is the correct factorization instead of Bell’s (14), he gives up something, and I don’t understand what it is. What are the spooky conclusions that mainstream physicists wanted to avoid by working with (14) instead of (15)? I understand Jaynes’s point about Bell’s hidden assumption (1) (bottom of page 12), and I agree with it. But I don’t understand what he says about hidden assumption (2).
We haven’t yet conclusively demonstrated non-locality without making assumptions about detector behavior (i.e., sophisticated adversarial detectors could replicate all the data observed so far, even in a classical universe, by selectively dropping data points).
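To make that concrete, here is a rough Monte Carlo sketch (my own, written for a spin-singlet setup rather than photon polarization) of the kind of local model meant here, in the spirit of the detection-loophole constructions in the literature such as Gisin and Gisin’s: a shared hidden direction plus selective non-detection on one side reproduces the quantum correlation E(a, b) = -a·b on the detected pairs, and hence a CHSH value near 2.8, even though the model is completely local.

```python
# Rough detection-loophole sketch (spin-singlet version).  Each pair
# carries a shared random direction lam.  Alice's detector always fires;
# Bob's "drops" events with a lam-dependent probability.  On the surviving
# coincidences the correlation matches the quantum prediction -a.b,
# so CHSH evaluates to ~2.83 despite the model being purely local.
import math
import random

def random_unit_vector():
    # Uniform direction on the sphere.
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def correlation(a, b, n=200_000):
    total, coincidences = 0, 0
    for _ in range(n):
        lam = random_unit_vector()
        A = -1 if dot(a, lam) > 0 else 1          # Alice: always detected
        if random.random() < abs(dot(b, lam)):     # Bob: selective detection
            B = 1 if dot(b, lam) > 0 else -1
            total += A * B
            coincidences += 1
    return total / coincidences

def setting(theta_deg):
    t = math.radians(theta_deg)
    return (math.sin(t), 0.0, math.cos(t))

a1, a2 = setting(0), setting(90)
b1, b2 = setting(45), setting(135)
S = (correlation(a1, b1) - correlation(a1, b2)
     + correlation(a2, b1) + correlation(a2, b2))
print(abs(S))   # ~2.8 on detected pairs from a local model
```

Bob’s detector only fires about half the time in this toy model, which is exactly why closing the detection loophole in a real experiment requires very high detector efficiency.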
The state of affairs may change relatively soon, if we successfully design experiments that violate classical locality even given unreliable detectors. I’d bet that we will.
That was my understanding as well. Has anyone worked out the detection complications (detector bias) that would be required to account for the data?