I strongly suspect that this is due to human error (say 95%). A few people in this thread are batting around much higher probabilities, but given that these aren't a bunch of crackpots but researchers at CERN, that seems like overconfidence. (1 - 10^-8 is really, really confident.) The strongest evidence that this is an error is that the neutrinos aren't arriving much faster than the speed of light, only a tiny bit over.
I'm now going to list some of that remaining 5%. I don't know enough to discuss their likelihoods in detail.
1) Neutrinos oscillating into a tachyonic form. This seems extremely unlikely. I’m not completely sure, but I think this would violate CPT among other things.
2) Neutrinos oscillating into a sterile neutrino that is able to travel along another dimension. We can approximately bound the number of neutrino types at around 6 (this follows from the SN 1987A data and solar neutrino data).
Both 1 and 2 require extremely weird situations where neutrinos oscillate into a specific form with extremely low probability but oscillate away from it with high probability. (If the probability of going to this form were high, we would have seen it in the solar neutrino deficit.) Both have the nice feature of potentially explaining dark matter as well.
3) Photons have mass, and we need to distinguish between the speed of light and the c in SR. The actual value of c in SR is slightly higher than the speed photons generally travel at, so high-energy, very low-mass particles can travel faster than the speed of light but not faster than c. This runs into a lot of problems, such as the fact that a lot of SR can be derived from Maxwell's equations and some reasonable assumptions about conservation, symmetry, and reference frames. So the speed of light should be the actual value showing up in SR.
One other thing to note that hasn't gotten a lot of press: if neutrinos regularly do this, we should have seen the SN 1987A neutrinos years before the light arrived, rather than just a few hours before. This is evidence against. But it is only weak evidence, since the early neutrino detectors were weak enough that this sort of thing could conceivably have been missed. Moreover, the Mont Blanc detector did detect a burst of neutrinos a few hours before the main SN 1987A burst. This is generally considered to be a statistical fluke, but it could potentially have been neutrinos traveling faster than the speed of light. Problem with this: why would none of the other detectors have also caught that early burst? Second problem: if this were the case, the early SN 1987A neutrinos might still have been traveling faster than light, but by much, much less than the claim here. This claim amounts to neutrinos traveling on the order of 1⁄10,000 to 1⁄40,000 of c faster than they should. The Mont Blanc event would only require them traveling on the order of 10^-9 c faster than they should.
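Those last two numbers are easy to sanity-check. A minimal sketch, assuming a distance to SN 1987A of roughly 168,000 light-years (an approximate figure on my part, but close to the standard value):

```python
# Back-of-envelope check: for a fractional speed excess delta = (v - c)/c,
# neutrinos lead the light from SN 1987A by roughly (D / c) * delta.

D_LIGHT_YEARS = 168_000      # approximate distance to SN 1987A (assumption)
HOURS_PER_YEAR = 8766        # 365.25 * 24

def lead_time_hours(delta):
    """Early-arrival time in hours for a fractional speed excess delta."""
    return D_LIGHT_YEARS * HOURS_PER_YEAR * delta

# An OPERA-scale excess (~1/40,000 of c) gives a lead of several years:
print(lead_time_hours(2.5e-5) / HOURS_PER_YEAR)   # ~4.2 (years)

# Conversely, a few-hours lead (Mont Blanc-style) needs only delta ~ 3e-9:
print(4.0 / (D_LIGHT_YEARS * HOURS_PER_YEAR))     # ~2.7e-9
```

So the two scenarios really are separated by more than four orders of magnitude in speed excess.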
The main problem with 3) is that if photons have mass, we would observe differences in the speed of light depending on energy at least as big as the difference now measured for neutrinos. This seems not to be the case, and c is measured with very high accuracy. If photons traveled at some velocity lower than c, but constant and independent of energy, that would violate special relativity.
Yes, but we almost always measure c precisely using light near the visible spectrum. Rough estimates were first made based on the behavior of Jupiter's moons (their eclipses occurred slightly too soon when Jupiter was near Earth and slightly too late when it was far from Earth).
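For scale, the maximum drift in those eclipse timings is about the light-crossing time of Earth's orbital diameter. A two-line check using standard constants:

```python
AU_METERS = 1.495978707e11   # one astronomical unit, in meters
C = 299_792_458.0            # speed of light, m/s

# Maximum Romer-style eclipse drift ~ light-crossing time of Earth's
# orbital diameter (2 AU):
delay_minutes = 2 * AU_METERS / C / 60
print(delay_minutes)  # ~16.6 minutes
```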
Variants of a Foucault apparatus are still used, and those work almost entirely with visible or near-visible light.
One can also use microwaves to do clever stuff with cavity resonance. I’m not sure if there would be a noticeable energy difference.
The ideal thing would be to measure the speed of light for higher energy forms of light, like x-rays and gamma rays. But I’m not aware of any experiments that do that.
The experimental upper bound on the photon mass is about 10^-18 eV. Photons near the visible spectrum have energies of a few eV, which means their relative deviation from c is of order 10^-37. Gamma rays would be even closer. I don't think the mass of the photon is measurable via the speed of light.
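For anyone who wants to check (or extend to the microwave question above): in the ultra-relativistic limit the fractional speed deficit is roughly m^2 / (2 E^2) in natural units. A small sketch with the bound quoted above:

```python
def speed_deficit(m_ev, energy_ev):
    """Fractional deviation 1 - v/c for a particle of mass m and energy E
    (both in eV), in the ultra-relativistic limit: ~ m^2 / (2 E^2)."""
    return m_ev ** 2 / (2 * energy_ev ** 2)

M_PHOTON_BOUND_EV = 1e-18  # experimental upper bound on the photon mass

print(speed_deficit(M_PHOTON_BOUND_EV, 2.0))    # visible light (~2 eV): ~1.3e-37
print(speed_deficit(M_PHOTON_BOUND_EV, 4e-5))   # ~10 GHz microwave: ~3e-28
print(speed_deficit(M_PHOTON_BOUND_EV, 1e6))    # ~MeV gamma ray: ~5e-49
```

Even the microwave case is more than twenty orders of magnitude below the ~10^-5 effect claimed for the neutrinos, so a cavity-resonance comparison wouldn't see anything either.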
The ideal thing would be to measure the speed of light for higher energy forms of light, like x-rays and gamma rays. But I’m not aware of any experiments that do that.
Err… build a broad spectrum telescope and look at an unstable stellar entity?
That’s an interesting idea. But the methods one uses to detect gamma rays or x-rays are very different from those used to detect visible light, so calibrating would be tough. And most unstable events play out over time, which makes this harder still. Take a supernova, for example: even the neutrino burst lasts on the order of tens of seconds. Telling whether the gamma rays arrived at just the right time would seem to be really tough. I’m not sure; I would need to crunch the numbers. It certainly is an interesting idea.
Hmm, what about actively racing them? Same method as yours but closer in. Set off a fusion bomb (which we understand really well) far away, say around 30 or 40 AU out. That's on the order of a few light-hours, which might be enough to see a difference if one knew that everything started at exactly the same time.
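Rough numbers for that baseline (the microsecond timing resolution below is purely my assumption, for illustration):

```python
AU_METERS = 1.495978707e11   # one astronomical unit, in meters
C = 299_792_458.0            # speed of light, m/s

def light_time_hours(distance_au):
    """One-way light travel time over a given distance, in hours."""
    return distance_au * AU_METERS / C / 3600

print(light_time_hours(30))  # ~4.2 hours
print(light_time_hours(40))  # ~5.5 hours

# With, say, microsecond timing over a ~4-hour baseline, the smallest
# detectable fractional speed difference would be roughly
# 1e-6 s / (4.2 * 3600 s) ~ 7e-11.
```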
Telling whether the gamma rays arrived at just the right time would seem to be really tough. I’m not sure; I would need to crunch the numbers.
Short answer: the numbers come out in the ballpark of hours, not seconds.
Hmm, what about actively racing them? Same method as yours but closer in.
Being closer in relies on trusting your engineering competence to calibrate your devices well. Do it based off interstellar events and you just need to go “Ok, this telescope went bleep at least a few minutes before that one” and then start scribbling down math. I never trust my engineering over my physics.
Photons having mass would screw up the Standard Model too… right?

Not necessarily. (Disclaimer: physics background, but this is not my area of expertise; I am working from memory of courses I took >5 years ago.) In electroweak unification, there are four underlying gauge fields, superpositions of which make up the photon, the W bosons, and the Z boson. You have to adjust the coefficients of the combinations very carefully to make the photon massless and the weak bosons heavy. You could adjust them slightly less carefully and have an extremely light, but not massless, photon, without touching the underlying gauge fields; then you can derive Maxwell and whatnot using the gauge fields instead of the physical particles, and presumably save SR as well.
Observe that the current experimental upper limit on the photon mass (well, I say current—I mean the first result that comes up in Google; it’s from 2003, but not many people bother with experimental bounds on this sort of thing) is 7×10^-19 eV, or what we call in teknikal fiziks jargon “ridiculously tiny”.
SR doesn’t depend on the behaviour of gauge fields. Special relativity is necessary to have a meaningful definition of “particle” in field theory. The gauge fields have to have a zero mass term because of gauge invariance, not Lorentz covariance. The mass is generated by interaction with the Higgs; this is essentially a trick which lets you forget gauge invariance after the model is postulated. It doesn’t impose any requirements on SR either.
I was thinking of how Lorentz invariance was historically arrived at: From Maxwell’s equations. If the photon has mass, then presumably Maxwell does not exactly describe its behaviour (although with the current upper bound it will be a very good approximation); but the underlying massless gauge field may still follow Maxwell.
First we should clarify what exactly is meant by “following Maxwell”. For example, in electrodynamics (weak interaction switched off) there is an interaction between the electron field and photons. Is this Maxwell? The classical Maxwell equations include the interaction of the electromagnetic field with current and charge densities, but they don’t include equations of motion for the charges. Nevertheless, we can say that in quantum electrodynamics
- the photon obeys Maxwell, because the QED Lagrangian is identical to the classical Lagrangian which produces the Maxwell equations (plus equations of motion for the charges), or
- the photon doesn’t obey Maxwell, because due to quantum corrections there is an extremely weak photon self-interaction, which is absent in classical Maxwell.
Note that the problem has nothing to do with masses (photons remain massless in QED), with the Glashow-Weinberg-Salam construction of the electroweak gauge theory, or with the Higgs boson. The apparent Maxwell violation (here, scattering of colliding light beams) arises because at the quantum level one can’t prevent the electron part of the Lagrangian from influencing the outcome, even if there are no electrons in the initial and final states. Whether or not this is viewed as a Maxwell violation is rather a choice of words. The electromagnetic field still obeys equations which are free Maxwell + interaction with non-photon fields, but there are effects which we don’t see in the classical case. Also, those violations of Maxwell are perfectly compatible with Lorentz covariance.
In the case of vector boson mass generation, one may again formulate it in two different ways:
- the vector boson follows Maxwell, since it obeys equations which are free Maxwell + interaction with the Higgs, or
- it doesn’t follow Maxwell, because the interaction with the Higgs manifests itself as an effective mass.
Again this is merely a choice of words.
Now, you mentioned the linear combinations of non-physical gauge fields which give rise to the physical photon and the weak interaction bosons. The way you put it, it seems that the underlying fields, which correspond to the U(1) and SU(2) gauge group generators, are massless and the mass somehow arises in the process of combining them together. This is not the case. The underlying fields all interact with the Higgs and therefore are all massive. Even if the current neutrino affair led to a slight revision of photon masslessness, the underlying fields would be “effectively massive” through interaction with the Higgs (I put “effectively massive” in quotes because it’s pretty weird to speak about effective properties of fields which are not measurable).
Of course, your overall point is true—there is no fundamental reason why the photon couldn’t obtain a tiny mass by the Higgs mechanism. Photon masslessness isn’t a theoretical prediction of the SM.
Ok, I sit corrected. This is what happens when an experimentalist tries to remember his theory courses. :)