[link] Faster than light neutrinos due to loose fiber optic cable.
A mundane cause for a surprising result. Consider this unconfirmed for now, however unsurprising it sounds.
According to sources familiar with the experiment, the 60-nanosecond discrepancy appears to come from a bad connection between a fiber-optic cable, which connects to the GPS receiver used to correct the timing of the neutrinos' flight, and an electronic card in a computer. After tightening the connection and then measuring the time it takes data to travel the length of the fiber, researchers found that the data arrive 60 nanoseconds earlier than assumed. Since this time is subtracted from the overall time of flight, it appears to explain the early arrival of the neutrinos.
New data, however, will be needed to confirm this hypothesis.
Source: Science/AAAS
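To make the sign of that correction concrete, here is a minimal sketch of the subtraction described above. All numbers except the 60 ns are hypothetical placeholders, not values from the experiment:

```python
# Sketch of the correction logic from the Science report; the raw
# interval and the assumed fiber delay below are made-up placeholders.
raw_interval_ns        = 2_439_340   # hypothetical raw start/stop interval
assumed_fiber_delay_ns = 1_000       # hypothetical delay assumed in the analysis
actual_fiber_delay_ns  = 1_000 - 60  # data arrive 60 ns earlier than assumed

reported_tof = raw_interval_ns - assumed_fiber_delay_ns  # what was published
true_tof     = raw_interval_ns - actual_fiber_delay_ns   # with the tightened cable

print(true_tof - reported_tof)  # 60: the neutrinos appeared 60 ns too early
```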
Update: It seems OPERA has made an official statement.
Source: Nature
So, how much money is Eliezer winning from bets?
$202.020202 at the most (the $20,000 at risk times the 1:99 odds).
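For the record, the arithmetic behind that figure, taking the $20,000 stake and 1:99 odds from the comment at face value:

```python
# Winnings on a 1:99 bet against FTL neutrinos, using the figures above.
stake = 20_000       # dollars at risk
odds  = 1 / 99       # 1:99 odds quoted in the comment
print(stake * odds)  # -> 202.0202... dollars, i.e. ~$202.02 at the most
```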
OPERA in Question February 23, 2012
You missed this:
It’s apparently even more complicated than that:
Official Word on Superluminal Neutrinos Leaves Warp-Drive Fans a Shred of Hope—Barely
News is mostly noise. I think LW shouldn't get distracted; there are other places where this kind of discussion is more appropriate.
I agree in general, but not as applied to this specific case. This is a major issue in science and/or epistemic rationality, about which prominent LWers have made bets, as noted below. News pertaining to “case studies” of this sort should always be welcome.
By making this complaint now and not earlier, you are promoting publication bias.
Could someone expand on this? I don’t get it.
I have a possible explanation. However I have very little confidence in it and primarily proffer it for entertainment value, since I think Douglas_Knight’s post may have been a joke.
There have been 4 OPERA threads that I've noted: this one, and 3 previous.
http://lesswrong.com/lw/8hh/opera_confirms_neutrinos_travel_faster_than_light/ (November, XiXiDu)
http://lesswrong.com/r/discussion/lw/83c/particles_may_not_have_broken_light_speed_limit/ (October, Me)
http://lesswrong.com/lw/7rc/particles_break_lightspeed_limit/ (September, Kevin)
September was the initial thread (they may be breaking light speed?); no near-top comment from Vladimir that I found (I did not check subthreads). October was a possible counter-argument thread; Vladimir objected to my posting it, for more than one reason, and I appreciated his critique. November was the additional confirming-evidence thread; again, no near-top comment that I found (I did not check subthreads). February is another counter-argument thread, and Vladimir objected to the posting of this one as well.
This is just barely enough evidence for an accusation of publication bias against Vladimir to be funny (you've objected to 2 retractions and haven't objected to 2 confirmations!), but not enough to be taken seriously. And humor tends to be upvoted well. I think.
That being said, I would not be surprised to be TOTALLY off base on Douglas_Knight’s comment, since this is just my Internal Narrator trying to mash together evidence for a hypothesis to explain what I think is a joke. If for instance, he wasn’t joking, this is all just a silly theory.
Other thoughts to consider:
How many people are going to read through all of those threads just to make a stats joke? Apparently, that includes me.
If enough of the same people thought "OPERA anomaly? I want to read this!", they may have been subconsciously aware of Vladimir_Nesov's earlier comments. When Douglas_Knight completed the loop out loud, it may have been funny, since some humor follows a "complete this pattern that I am aware of but have not actually verbalized!" format.
That may have been too much expansion. But on the upside, there are also links to previous Opera threads. Feel free to come up with a better (or more entertaining) hypothesis of Douglas_Knight’s remarks based on that reading.
I don’t see how discussion of current events would harm the community. Frankly we haven’t got enough to talk about. If you’re not interested, don’t click.
This works only to some extent, otherwise throwing all of the Internet in one big cauldron and stirring a bit would make no difference. The value of a forum is in the selection of its conversations, and while any given choice may be insignificant, that doesn’t argue for some particular way of resolving it, and inability to make such determinations may well add up to accumulated loss in quality.
It's not a picture of a cute kitten, for crying out loud. Nor is it a gateway drug or a frictionless slippery slope. There IS a big difference between a random news item and new information about something we have already had epistemic discussions about. There are even new considerations now about how much we should be wary of confirmation bias when handling this evidence.
That “some extent” more than covers this page. So if you are not interested, really, don’t click.
If it relates to other on-topic discussions, then there should be links to them in the post. Without that, it’s just an off-topic post.
I'd agree with that. I remember all the rationality-related discussions, so it fits together for me, and I barely read it beyond the bare factoid. But if I were to make a post on it, I would definitely include the relevant links and more than a token allusion to the considerations on how biases may relate.
From what exactly? What discussion have we been having in the last day or so that I missed out on by reading this for 20 seconds? Or, in the counterfactual world where nobody posted this or was distracted by reading it, what is the expected caliber of post that some undistracted viewer would otherwise have written?
From things other than discussions on LW, which are the measure of its comparative quality. I agree that this doesn't clearly apply to this particular case; I was talking more about news in general.
Yes, news in general is boring. I avoid news enough that this post is the first I heard of this subject (and my only likely source on it).
For example I could have spent that 20 seconds researching new conversation material so I don’t lose nerd status when I meet with my engineer friends? Oh.
That's a fairly big goalpost switch, by the way. You went from saying LessWrong shouldn't be distracted by this and that it should be talked about elsewhere to relying on LessWrong itself being, for most intents and purposes, a waste of time relative to actually living. In fact, since your initial move was to place this topic outside the scope of LessWrong, the comparison ends up being, among other things, between reading a LessWrong spin on the neutrino-speed research and discussing the same subject somewhere else, where the relevant biases related to updating on this kind of research wouldn't even be comprehended.
Disagree!
One more example of misapplied statistics. You see a 5-standard-deviation signal for faster-than-light neutrinos and reason like this:
"Well, prior to seeing these data I would rate the odds of neutrinos traveling FTL as, say, 1:1000, but this is a full 5 standard deviations, so the likelihood ratio is about 3.5 million in favor of them going FTL after all, so I must revise my belief and now accept that those neutrinos travel FTL at ~3500:1 odds."
...which, of course, only happens because you have oversimplified by considering just two hypotheses. In reality you should also have thrown in some other possibilities, e.g. some undiscovered flaw in the measurements, to which I would, prior to seeing the data, assign a higher probability: say, odds of 1:100 (a mere 1% probability of a faulty experiment, just to be generous to the experimenters). After seeing the data we also have to revise this probability, and it comes out as ~35000:1 odds in favor of a faulty experimental design.
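Concretely, the two updates run as follows (a minimal sketch using the comment's own figures; the priors and the ~3.5-million likelihood ratio are its stipulations, not measured values):

```python
# Both hypotheses (FTL neutrinos, flawed experiment) predict the observed
# 60 ns shift, so the 5-sigma result boosts both by the same factor.
prior_odds_ftl   = 1 / 1000  # stipulated prior odds of FTL neutrinos
prior_odds_flaw  = 1 / 100   # stipulated prior odds of an undiscovered flaw
likelihood_ratio = 3.5e6     # ~5-sigma result, per the comment

posterior_odds_ftl  = prior_odds_ftl  * likelihood_ratio
posterior_odds_flaw = prior_odds_flaw * likelihood_ratio
print(posterior_odds_ftl, posterior_odds_flaw)  # ~3500 and ~35000
```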
In other words, the more "statistically significant" the result of such an experiment, the more it is evidence for faulty measurement and against the experimenters' claim (here, FTL neutrinos).
It’s still evidence for the claims, just also evidence for the experiment being faulty.
Technically, when viewed on the log-odds scale, it is exactly the same amount of evidence for either hypothesis.
On the (0,1) scale of probabilities it is, however, stronger evidence for a flawed experiment. E.g., suppose we have a null hypothesis H0 at a prior of 0.25 and two hypotheses H1 and H2 at 0.25 and 0.5, respectively, and we see evidence that has a likelihood ratio of 5:1 for each of these hypotheses over H0. Then we have posterior ratios P(H0|D)/P(H1|D) = 1/5 and P(H0|D)/P(H2|D) = 1/10, and after normalization, P(H0) = 1/16, P(H1) = 5/16, P(H2) = 10/16. So, in this situation, P(H2) has increased by 2/16 and P(H1) only by 1/16. Which is what I meant in my comment: the probability of the more likely hypothesis increases faster, so H2 pulls further ahead of H1 :) Although on the log-odds scale, of course, both H1 and H2 received the same amount of evidence.
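For anyone who wants to check the normalization, here is the same worked example as a short sketch:

```python
# The three-hypothesis example above: equal 5:1 likelihood ratios over H0,
# unequal priors.
priors      = {"H0": 0.25, "H1": 0.25, "H2": 0.50}
likelihoods = {"H0": 1.0,  "H1": 5.0,  "H2": 5.0}  # 5:1 over H0

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: u / total for h, u in unnormalized.items()}
print(posteriors)  # H0: 1/16, H1: 5/16, H2: 10/16

# H1 gained 5/16 - 4/16 = 1/16; H2 gained 10/16 - 8/16 = 2/16,
# even though both received the same evidence on the log-odds scale.
```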
They didn’t try using a different clock in the replication?
I get the impression that there is a huge infrastructure just for measuring the time, and that it’s easier to check every part of your apparatus ten times over than to build a whole new clock.
Reasonable discussion of how the anomaly was handled
This just in! Scientists prove that loosening cables causes FTL neutrinos! ;p
How long do you think it will take for CERN to officially acknowledge this problem—assuming it is correct?
60 nanoseconds ~= 60*30cm ~= 18 meters.
I kind of doubt that a lousy connection would make the timing signal arrive 18 meters late. That would require the signal being re-sent and arriving only on the second, third, or later attempt.
Maybe that could happen between consumer-grade routers, which have such complex algorithms as to have, for all intents and purposes, non-deterministic timing; but if they were using that to send time in this lab, we don't need to find lousy connections to discard the results. Scratch that: I don't think even a consumer-grade router would re-send the same packet at a brighter and brighter light level until it gets through, or the like (or forget about the brightness and simply do it every time). Computers re-send data under the TCP protocol and don't under UDP, and who in their right mind would use TCP for timing anyway?
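A toy simulation of the retransmission point (purely illustrative; the base delay, timeout, and loss rate below are made up, not anything from the actual setup):

```python
import random

# If lost packets are retransmitted, each loss adds a whole timeout to the
# arrival time, making the timing non-deterministic in the way described.
def arrival_delay_ns(one_way_ns=100, timeout_ns=1000, loss_rate=0.01):
    delay = one_way_ns
    while random.random() < loss_rate:  # packet lost; sender re-sends
        delay += timeout_ns
    return delay

delays = [arrival_delay_ns() for _ in range(100_000)]
# Most packets arrive at the base delay; a few arrive whole timeouts late.
print(min(delays), max(delays))
```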
That’s 18 meters for light in a vacuum. GPS receivers and lousy connections are not made out of vacuums. Translating into meters doesn’t help us all that much when we are considering hardware faults like this.
18 meters of air, ~12 meters of glass, ~36 meters of going through glass instead of air, and a great many meters for going through faster glass rather than slower glass once the connection is tightened (and don't think of the signal bouncing at an angle and arriving slower; that's not how fibre optics works). If they have a long glass cable, you can be certain that the delay has to be actually measured, because the speed is temperature-dependent.
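For concreteness, the conversions behind those figures (a rough sketch; n ≈ 1.5 for glass is a generic textbook value, not a measured property of the actual fiber):

```python
C_M_PER_NS = 0.2998  # speed of light in vacuum, meters per nanosecond
N_GLASS    = 1.5     # assumed refractive index of the fiber
DELAY_NS   = 60

print(DELAY_NS * C_M_PER_NS)            # ~18 m: light in vacuum/air in 60 ns
print(DELAY_NS * C_M_PER_NS / N_GLASS)  # ~12 m: light in glass in 60 ns

# Routing light through glass instead of air adds (n - 1) / c extra
# nanoseconds per meter, so picking up 60 ns that way would take:
print(DELAY_NS / ((N_GLASS - 1) / C_M_PER_NS))  # ~36 m of glass replacing air
```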
The point of the conversion from nanoseconds to meters is to have some sort of intuitive reference for just how far behind the signal must fall via a lousy connection.
And in terms of the actual change in how far the light travels—quite possibly 0 meters.
It's a recipe for intuitive confusion. We know there is a hardware fault of some kind due to connection difficulties. We don't know the precise nature of the error introduced by the connection problem. There is more than one way a messed-up connection could result in an electronic device delivering input too slowly: few of them involve introducing more actual distance traveled, and many would give absurd equivalent distances in the kilometers for reasons entirely unrelated to the speed of light.
If you absolutely must use a distance metric to measure a fault with time reporting then I recommend adopting the unit “nano lightseconds”.
And I outlined those other ways, involving packet loss and the re-sending of old data.
The intuition here is a hundred percent correct in ruling out any simple mechanistic explanation, such as the gap introducing extra distance, the light bouncing off at an angle, et cetera. Look what we achieved by converting to meters: we narrowed the problem down to the connection protocol. Something has to be re-sending old packets to introduce that kind of delay. Or, if they send pulses on every tick, which is probably not what they are doing, they must be counting pulses wrongly in the presence of inevitable pulse loss. And on top of that, they must be unaware of the packet-loss issue.
That is a much more severe problem than the under-tightening of connections by a contractor, inasmuch as it implies a much higher degree of incompetence among the scientists. It is also, incidentally, significantly less likely.
By the way, they physically carried an atomic clock around in one of the replications of the experiment, which makes the linked article's description of the issue (the issue with the GPS-to-computer wire) entirely null and void.
Don't get me wrong: I don't believe in the faster-than-light neutrinos either. I would also bet money that it is an error. I am aware, however, that due to strong biases here the 'explanations' of the issue are likely to be of very low quality, especially ones as vague in attribution as "according to sources familiar with the experiment". And I'm not willing to agree with invalid reasoning from those who are explaining the error just because I agree with their final conclusion.
There’s a CERN press release about that.
Ahh, good. Much better than the article's link. I didn't re-read the whole discussion. Maybe the cable was not working at all and the clock did not synchronize to GPS at all, or something equally silly. That still doesn't explain how it failed when they physically transported the atomic clock, though.
This might have done better as a comment in the previous threads. Or it could have linked to them.