60 nanoseconds ~= 60*30cm ~= 18 meters.
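A quick back-of-the-envelope check of that conversion (a sketch; the 30 cm per nanosecond figure is the usual rounding of c):

```python
# Light travels ~0.3 m per nanosecond in vacuum (c ≈ 2.998e8 m/s).
C = 2.998e8  # speed of light in vacuum, m/s

delay_ns = 60
distance_m = C * delay_ns * 1e-9
print(f"{delay_ns} ns of delay ≈ {distance_m:.1f} m of light travel in vacuum")
# → 60 ns of delay ≈ 18.0 m of light travel in vacuum
```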
I kind of doubt that a lousy connection would make the timing signal arrive 18 meters late. That would require the signal being re-sent and arriving only on the second, third, or later attempt.
Maybe that could happen between consumer-grade routers, whose algorithms are complex enough to have, for all intents and purposes, non-deterministic timing; but if they were using those to send time in this lab, we don't need to find lousy connections to discard the results. Scratch that, I don't think even a consumer-grade router behaves that way, re-sending the same packet at brighter and brighter light levels until it gets through, or the like (and then forgetting about the brightness and doing it every time). Computers re-send stuff when using the TCP protocol; with UDP they don't, and who in their right mind would use TCP for time anyway?
That’s 18 meters for light in a vacuum. GPS receivers and lousy connections are not made of vacuum. Translating into meters doesn’t help us all that much when we are considering hardware faults like this.
18 meters of air, ~12 meters of glass, ~36 meters of going through glass instead of air, and a great many meters for going through faster glass rather than slower glass once the connection is tightened (and don’t think of the signal bouncing at an angle and arriving later; that’s not how fibre optics works). If they have a long glass cable, you can be certain that the delay has to be actually measured, because the speed is temperature-dependent.
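Those equivalents follow directly from the refractive indices (a sketch assuming n ≈ 1.0003 for air and n ≈ 1.5 for typical fibre glass; the exact fibre index varies):

```python
C = 2.998e8      # speed of light in vacuum, m/s
N_AIR = 1.0003   # refractive index of air (assumed)
N_GLASS = 1.5    # typical refractive index of optical fibre core (assumed)
DELAY_S = 60e-9  # the 60 ns anomaly

# Extra path length needed to accumulate 60 ns of delay:
extra_air = DELAY_S * C / N_AIR        # ≈ 18 m of added air path
extra_glass = DELAY_S * C / N_GLASS    # ≈ 12 m of added glass path

# Length of path that must be glass *instead of* air to add 60 ns,
# since only the difference in propagation speed matters:
swapped = DELAY_S * C / (N_GLASS - N_AIR)  # ≈ 36 m

print(f"air: {extra_air:.0f} m, glass: {extra_glass:.0f} m, "
      f"glass-instead-of-air: {swapped:.0f} m")
```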
The point of the conversion from nanoseconds to meters is to give some intuitive reference for just how far behind the signal must fall via a lousy connection.
And in terms of the actual change in how far the light travels—quite possibly 0 meters.
It’s a recipe for intuitive confusion. We know there is a hardware fault of some kind due to connection difficulties. We don’t know the precise nature of the error the connection problem introduces. There is more than one way a messed-up connection could make an electronic device deliver input too slowly: a few involve introducing more actual distance traveled, and many would give an absurd distance reading in the kilometers for reasons entirely unrelated to the speed of light.
If you absolutely must use a distance metric to describe a fault in time reporting, then I recommend adopting the unit “nano-lightseconds”.
And I outlined those other ways, involving packet loss and the re-sending of old data.
The intuition here is a hundred percent correct in ruling out any simple mechanistic explanations, such as the gap introducing extra distance, the light bouncing off at an angle, et cetera. Look what we achieved by converting to meters: we narrowed the problem down to the connection protocol. Something has to be re-sending old packets to introduce that kind of delay. Or, if they send pulses on every tick (which is probably not what they are doing), they must be counting pulses wrongly in the presence of inevitable pulse loss. And on top of that, they must be unaware of the packet-loss issue.
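The re-sending mechanism can be illustrated with a toy model (entirely hypothetical numbers; assumes the receiver simply uses whichever timing packet last arrived): if a timing packet is lost and a stale copy is delivered on a later attempt, the receiver's notion of "now" lags by a whole retransmission interval, which dwarfs any cable-length effect.

```python
import random

random.seed(1)

TICK_NS = 100    # hypothetical interval between timing packets, ns
LOSS_RATE = 0.3  # hypothetical fraction of packets lost on first attempt

def received_time(true_time_ns: int) -> int:
    """Return the timestamp the receiver ends up using.

    If the packet for this tick is lost, the link re-delivers the
    previous tick's packet instead, so the receiver's clock reading
    is stale by one whole tick.
    """
    if random.random() < LOSS_RATE:
        return true_time_ns - TICK_NS  # stale re-sent packet
    return true_time_ns

# Average lag over many ticks works out to loss_rate * tick interval,
# a systematic offset unrelated to any physical path length.
ticks = range(0, 100_000 * TICK_NS, TICK_NS)
lags = [t - received_time(t) for t in ticks]
avg_lag = sum(lags) / len(lags)
print(f"average timestamp lag ≈ {avg_lag:.0f} ns")
```

The point of the sketch is only that a protocol-level fault produces a systematic timing offset proportional to loss rate and tick interval, not to any distance light travels.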
That is a much more severe problem than the under-tightening of connections by a contractor, inasmuch as it implies a much higher degree of incompetence among the scientists. It is also, incidentally, significantly less likely.
By the way, they physically carried an atomic clock around in one of the replications of the experiment, which makes the linked article’s description of the issue (the GPS-to-computer wire) entirely null and void.
Don’t get me wrong. I don’t believe in faster-than-light neutrinos either; I would also bet money that it is an error. I am, however, aware that due to the strong biases here, the ‘explanations’ of the issue are likely to be of very low quality, especially the ones with attribution as vague as “according to sources familiar with the experiment”. And I’m not willing to agree with invalid reasoning from those explaining the error just because I agree with their final conclusion.
There’s a CERN press release about that.
Ahh, good. Much better than the article’s link. I didn’t re-read the whole discussion. Maybe the cable was not working at all and the clock did not synchronize to GPS, or something equally silly. That doesn’t explain how it failed when they physically transported the atomic clock, though.