Particles break light-speed limit?
http://www.nature.com/news/2011/110922/full/news.2011.554.html
http://arxiv.org/abs/1109.4897v1
http://usersguidetotheuniverse.com/?p=2169
http://news.ycombinator.com/item?id=3027056
Ereditato says that he is confident enough in the new result to make it public. The researchers claim to have measured the 730-kilometre trip between CERN and its detector to within 20 centimetres. They can measure the time of the trip to within 10 nanoseconds, and they have seen the effect in more than 16,000 events measured over the past two years. Given all this, they believe the result has a significance of six-sigma — the physicists’ way of saying it is certainly correct. The group will present their results tomorrow at CERN, and a preprint of their results will be posted on the physics website ArXiv.org.
At least one other experiment has seen a similar effect before, albeit with a much lower confidence level. In 2007, the Main Injector Neutrino Oscillation Search (MINOS) experiment in Minnesota saw neutrinos from the particle-physics facility Fermilab in Illinois arriving slightly ahead of schedule. At the time, the MINOS team downplayed the result, in part because there was too much uncertainty in the detector’s exact position to be sure of its significance, says Jenny Thomas, a spokeswoman for the experiment. Thomas says that MINOS was already planning more accurate follow-up experiments before the latest OPERA result. “I’m hoping that we could get that going and make a measurement in a year or two,” she says.
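For scale, a quick back-of-the-envelope on the precision figures quoted above (a Python sketch; the 730 km, 20 cm and 10 ns numbers come from the article, c is rounded):

```python
# Rough scale of the quoted measurement precision.
c = 3e8                      # m/s (rounded)
trip_time = 730e3 / c        # light-travel time for the 730 km baseline
print(trip_time)             # ~0.0024 s
print(10e-9 / trip_time)     # 10 ns timing uncertainty is ~4e-6 of the trip time
print(0.20 / 730e3)          # 20 cm distance uncertainty is ~3e-7 of the baseline
```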
Perhaps the end of the era of the light cone and beginning of the era of the neutrino cone? I’d be curious to see your probability estimates for whether this theory pans out. Or other crackpot hypotheses to explain the results.
Mentioned in:
- Comment on "A question about Eliezer" (19 Apr 2012 22:54 UTC; 123 points)
- "OPERA Confirms: Neutrinos Travel Faster Than Light" (18 Nov 2011 9:58 UTC; 12 points)
- Comment on "Rationality Quotes October 2011" (3 Oct 2011 19:55 UTC; 10 points)
- Comment on "[link] Faster than light neutrinos due to loose fiber optic cable" (23 Feb 2012 15:42 UTC; 7 points)
- "Particles may not have broken light speed limit" (17 Oct 2011 19:00 UTC; 3 points)
From an actual physicist:
Yes, but what would he want as the opposing wager? I’ll gladly put up a cent (or, for that matter, $10,000,000,000,000 ZWR) against his house, while I wouldn’t consider betting $10,000.
I’ll take bets at 99-to-1 odds against any information propagating faster than c. Note that this is not a bet for the results being methodologically flawed in any particular way, though I would indeed guess some simple flaw. It is just a bet that when the dust settles, it will not be possible to send signals at a superluminal velocity using whatever is going on—that there will be no propagation of any cause-and-effect relation at faster than lightspeed.
My real probability is lower, but I think that anyone who’d bet against me at 999-to-1 will probably also bet at 99-to-1, so 99-to-1 is all I’m offering.
I will not accept more than $20,000 total of such bets.
I’ll take that bet, for a single pound on my part against 99 from Eliezer.
(explanation: I have a 98-2 bet with my father against the superluminal information propagation being true, so this sets up a nice little arbitrage).
Is that c the speed of light in vacuum or c the constant in special relativity?
c is the constant as it appears in fundamental physical equations, relativistic or quantum. Anything slowing down the propagation of photons through an apparent vacuum (such as interaction with dark matter) which did not affect, for example, the mass-energy equivalence of E=mc^2, would not win the bet.
If Kevin doesn’t go through with taking that bet for $202, I’ll take it for $101.
I suggest clarifying the bet to say “information propagating faster than c as c is defined at the time of this bet”. With that clarification, I can pay up front in cash for $202 as soon as possible.
There are many definitions of c—it appears as a constant in many different physical equations. Right now, all of these definitions are consistent. If you have a new physics where all these definitions remain consistent and you can still transmit information faster than c, then certainly I have lost the bet. Other cases would be harder to settle—I did state that weird physics along the lines of “this is why photons are slowed down in a vacuum by dark matter, but neutrinos aren’t slowed” wouldn’t win the bet.
Actually, what is the worst that could happen? It’s not [the structure of the universe is destabilized by the breakdown of causality], because that would have already happened if it were going to.
The obvious one would be [Eliezer loses $20,000], except that would only occur in the event that it were possible to violate causality, in which case he would presumably arrange to prevent his past self from making the bet in the first place, yeah? So really, it’s a win-win.
Unless one of the people betting against him is doing so because ve received a mysterious parchment on which was written, in ver own hand, “MESS WITH TIME.”
If there are ways to violate causality they are likely restrictive enough that we can’t use them to violate causality prior to when we knew about the methods (roughly). This is true for most proposed causality violating mechanisms. For example, you might be able to violate causality with a wormhole, but you can’t do it to any point in spacetime prior to the existence of the wormhole.
In general, if there are causality violating mechanisms we should expect that they can’t violate causality so severely as to make the past become radically altered since we just don’t see that. It is conceivable that such manipulation is possible but that once we find an effective method of violating causality we will be quickly wiped out (possibly by bad things related to the method itself) but this seems unlikely even assuming one already has a causality violating mechanism.
Mostly agree. Would downgrade to "can't or won't". Apart from being a little more complete, the difference matters for anthropic considerations.
Does it even make sense to say “won’t”, or for that matter bring up anthropic considerations, in reference to causality violation?
This is a serious question, I don’t know the answer.
I’m not sure. If a universe allows sufficient causality violation then it may be that it will be too unstable for observers to arise in that universe. But I’m not sure about that. This may be causality chauvinism.
(I feel like there’s a joke to be made here, something to do with “causality chauvinism”, “causality violation”, “too unstable for observers to arise”, the relative “looseness” of time travel rules, maybe also the “Big Bang”… it’s on the tip of my brain… nah, I got nothing.)
Yes. (Leave out the anthropics, when that makes sense to bring up is complicated.)
Most of the reason for saying:
… are somewhat related to “causality doesn’t appear to be violated”. If (counterfactually) causality can be violated then it seems like it probably hasn’t happened yet. This makes it a lot more likely that causality violations (like wormholes and magic) that are discovered in the future will not affect things before their discovery. This includes the set of (im)possible worlds in which prior-to-the-magic times cannot be interfered with and also some other (im)possible worlds in which it is possible but doesn’t happen because it is hard.
An example would be faster-than-light neutrinos. It would be really damn hard to influence the past significantly with such neutrinos with nothing set up to catch them. It would be much easier to set up a machine to receive messages from the future.
It may be worth noting that “causality violation” does not imply “complete causality meltdown”. The latter would definitely make “won’t” rather useless.
Well, it’s just… how could you tell? I mean, maybe the angel that told Colombo to sail west was a time-travelling hologram sent to avert the Tlaxcalan conquest of Europe.
Well yes, I understand you probably couldn't use faster-than-light neutrinos from the future (FTLNFTFs) to effect changes in the year 1470 any more easily or precisely than, say, by creating a neutrino burst equivalent to 10^10^9999 galaxies going supernova simultaneously one AU from Earth, presumably resulting in the planet melting or some such thing, I don't know.
However, elsewhere in this thread I suggested a method that takes advantage of a system that already exists and is set up to detect neutrinos (admittedly not FTLNFTFs specifically, though I don’t know why that should matter). I still don’t see exactly what prevents Eliezer_2831 from fiddling around with MINOS’s or CERN’s observations in a causality-violating but not-immediately-obvious manner.
Other than, you know, basic human decency.
We obviously can’t with certainty. But we can say it is highly unlikely. The universe looks to us like it has a consistent causal foundation rather than being riddled with arbitrary causality violations. That doesn’t make isolated interventions impossible, just unlikely.
Overwhelming practical difficulties. To get over 800 years of time travel in one hop using neutrinos going very, very slightly faster than light the neutrinos would have to be shot from a long, long way away. Getting a long, long, way away takes time and is only useful if you are traveling close enough to the speed of light that on the return trip the neutrinos gain more time than what you spent travelling. Eliezer_2831 would end up on the other side of the universe somewhere and the energy required to shoot enough neutrinos to communicate over that much distance would be enormous. The scope puts me in mind of the Tenth Doctor: “And it takes a lot of power to send this projection— I’m in orbit around a supernova. [smiling weakly] I’m burning up a sun just to say goodbye.”
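A rough sense of the distances involved, as a sketch: assuming the roughly 1/40,000 (2.5e-5) fractional speed excess quoted later in the thread, the faster-than-light leg alone would have to be tens of millions of light-years long to beat light by ~800 years.

```python
# How long must the superluminal leg be for the neutrinos to arrive
# ~800 years ahead of light? (Assumes the ~2.5e-5 excess quoted elsewhere
# in the thread; purely illustrative.)
excess = 2.5e-5                          # (v - c) / c
head_start_years = 800
leg_length_ly = head_start_years / excess
print(leg_length_ly)                     # ~3.2e7 light-years
# And the emitter first has to be carried out there at sub-light speed,
# which is where the enormous time and energy budget in the comment
# comes from.
```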
I'm not sure if that scenario is more or less difficult than the remote neutrino manufacturing scenario. The engineering doesn't sound easy, but once it is done at any time before the heat death of the universe you just win. You can send anything back to (almost) any time.
Unless you’re fighting Photino Birds.
But that’s pretty unlikely, yeah.
That sounds like it’s a reference to something awesome. Is it?
Fairly awesome, I’d say.
In the context of almost every proposed causality violation mechanism I’ve seen seriously discussed, it really is can’t, not won’t. Wormholes aren’t the only example. Tipler Cylinders for example don’t allow time travel prior to the point when they started rotating. Godel’s rotating universe has similar restrictions. Is there some time travel proposal I’m missing?
I agree that when considering anthropic issues won’t becomes potentially relevant if we had any idea that time travel could potentially allow travel prior to the existence of the device in question. In that case, I’d actually argue in the other direction: if such machines could exist, I’d expect to see massive signs of such interference in the past.
There are plenty of mechanisms in which can’t applies. There are others which don’t have that limitation. I don’t even want to touch what qualifies as ‘seriously discussed’. I’m really not up to date with which kinds of time travel are high status.
Ignore status issues. Instead focus on time travel mechanisms that don’t violate SR. Are there any such mechanisms which allow such violation before the time travel device has been constructed? I’m not aware of any.
Alcubierre drives.
I’m pretty sure—not totally sure, I’m perfectly willing to be corrected by anyone with more knowledge of the physics than me, but still, pretty sure—that the stated objection would not preclude The Future from sending back time-travelling neutrinos to, say, the Main Injector Neutrino Oscillation Search in a pattern that spells out the Morse code for T-E-L-L—E-Y—D-N-M-W-T, possibly even in such a way that they wouldn’t figure out the code until after CERN’s results were published.
This would be really difficult. The primary problem is that neutrinos don’t interact with most things, so to send a signal you’d need to send a massive burst of neutrinos to the point where we should expect it to show up on other neutrino detectors also. The only plausible way this might work is if someone used a system at CERN, maybe the OPERA system itself in a highly improved and calibrated form to send the neutrinos back.
Although if neutrinos can go back in time then so much of physics may be wrong that this sort of speculation is extremely unlikely to be at all helpful. This is almost like going to a 17th-century physicist and asking them to speculate about what things would be like if nothing could travel faster than the speed of light.
Yeah, see, I’m not betting against random cool new physics, I wouldn’t offer odds like that on there not being a Higgs boson, I’m betting on the local structure of causality. Could I be wrong? Yes, but if I have to pay out that entire bet, it won’t be the most interesting thing that happened to me that day.
How confident am I of this? Not just confident to offer to bet at 99-to-1 odds. Confident enough to say...
“Well, that was an easy, risk-free $202.”
Or to put it even more plainly:
The consequence of the FTL neutrinos CERN thinks they found at six sigma significance is not the breakdown of causality. You can have faster-than-light neutrinos without backwards propagation of information. This is not the end of normality, but a new normality, one where Lorentz invariance is broken. This would mean that there is a universal reference frame that trumps but doesn't destroy relativity. If anything, a universal reference frame seems like a stronger causal structure than relativity.
This whole thing would be so normal, that there’s a pre-existing effective field theory called the Standard Model Extension. http://en.wikipedia.org/wiki/Standard-Model_Extension
Suggested Wikipedia skimming:
http://en.wikipedia.org/wiki/Lorentz_transformation
http://en.wikipedia.org/wiki/Lorentz_covariance
http://en.wikipedia.org/wiki/Lorentz-violating_neutrino_oscillations
http://blogs.discovermagazine.com/cosmicvariance/2005/10/25/lorentz-invariance-and-you/ is what gave me the intuition of the universal inertial frame.
I'm at around 10% odds of this whole thing seeming like weak consensus in 3 years, and something like >80% odds (on a very, very long bet) that FTL information travel is locally possible, outside of the local structure of causality.
It’s not about transmitting information into the past—it’s about the locality of causality. Consider Judea Pearl’s classic graph with SEASONS at the top, SEASONS affecting RAIN and SPRINKLER, and RAIN and SPRINKLER both affecting the WETness of the sidewalk, which can then become SLIPPERY. The fundamental idea and definition of “causality” is that once you know RAIN and SPRINKLER, you can evaluate the probability that the sidewalk is WET without knowing anything about SEASONS—the universe of causal ancestors of WET is entirely screened off by knowing the immediate parents of WET, namely RAIN and SPRINKLER.
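A minimal sketch of that screening-off property (hypothetical numbers, not Pearl's): once WET's direct parents RAIN and SPRINKLER are fixed, additionally conditioning on SEASON changes nothing.

```python
# A toy version of the SEASON -> {RAIN, SPRINKLER} -> WET graph, with
# made-up numbers, to show "screening off": once WET's direct parents are
# known, also conditioning on SEASON changes nothing.
P_season            = {"dry": 0.5, "wet": 0.5}
P_rain_given_season = {"dry": 0.1, "wet": 0.6}      # P(RAIN=1 | SEASON)
P_sprk_given_season = {"dry": 0.7, "wet": 0.2}      # P(SPRINKLER=1 | SEASON)
P_wet_given_parents = {(0, 0): 0.01, (0, 1): 0.80,  # P(WET=1 | RAIN, SPRINKLER)
                       (1, 0): 0.90, (1, 1): 0.99}

def p_wet_given(rain, sprk, season=None):
    """P(WET=1 | RAIN, SPRINKLER [, SEASON]) by brute-force enumeration."""
    num = den = 0.0
    for s in P_season:
        if season is not None and s != season:
            continue
        p_r = P_rain_given_season[s] if rain else 1 - P_rain_given_season[s]
        p_k = P_sprk_given_season[s] if sprk else 1 - P_sprk_given_season[s]
        weight = P_season[s] * p_r * p_k
        num += weight * P_wet_given_parents[(rain, sprk)]
        den += weight
    return num / den

for season in (None, "dry", "wet"):
    print(season, p_wet_given(rain=1, sprk=0, season=season))
# All three values agree: SEASON is screened off by WET's immediate parents.
```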
Right now, we have a physics where (if you don’t believe in magical collapses) the amplitude at any point in quantum configuration space is causally determined by its immediate neighborhood of parental points, both spatially and in the quantum configuration space.
In other words, so long as I know the exact (quantum) state of the universe for 300 meters around a point, I can predict the exact (quantum) future of that point 1 microsecond into the future without knowing anything whatsoever about the rest of the universe. If I know the exact state for 0.3 meters around, I can predict the future of that point one nanosecond later. And so on to the continuous limit: the causal factors determining a point's infinitesimal future are screened off by knowing an infinitesimal spatial neighborhood of its ancestors.
This is the obvious analogue of Judea Pearl’s Causality for continuous time; instead of discrete causal graphs, you have a continuous metric of relatedness (space) which shrinks to an infinitesimal neighborhood as you consider infinitesimal causal succession (time).
This, in turn, implies the existence of a fundamental constant describing how the neighborhood of causally related space shrinks as time diminishes, to preserve the locality of causal relatedness in a continuous physics.
This constant is, obviously, c.
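A trivial numeric check of the neighborhood-to-time ratio being described (rounded c):

```python
# The "shrinking constant" of the causal neighborhood, from the numbers above.
print(300 / 1e-6)    # 300 m of neighborhood per microsecond -> 3.0e8 m/s
print(0.3 / 1e-9)    # 0.3 m of neighborhood per nanosecond  -> 3.0e8 m/s
# Both ratios come out to (rounded) c: the rate at which the causally
# relevant neighborhood shrinks with the prediction horizon is the same
# constant that appears in relativity.
```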
I’ve never read this anywhere else, by the way. It clearly isn’t universally understood, because if all physicists understood the universe in these terms, none of them would believe in a “collapse of the wavefunction”, which is not locally related in the configuration space. I would be surprised neither to find that the above statement is original, nor that it has been said before.
I am attempting to bet that physics still looks like this after the dust settles. It’s a stronger condition than global noncircularity of time—not all models with globally noncircular time have local causality.
If violating Lorentz invariance means that physics no longer looks like this, then I will bet at 99-to-1 odds against violations of Lorentz invariance. But I can’t make out from the Wikipedia pages whether Lorentz violations mean the end of local causality (which I’ll bet against) or if they’re random weird physics (which I won’t bet against).
I am also willing to bet that the fundamental constant c as it appears in multiple physical equations is the constant of time/space locality, i.e., the constant we know as c is fundamentally the shrinking constant by which an infinitesimal neighborhood in space causally determines an infinitesimal future in time. I am willing to lose the bet if there’s still locality but the real size of the infinitesimal spatial neighborhood goes as 2c rather than c (though I’m not actually sure whether that statement is even meaningful in a Lorentz-invariant universe) and therefore you can use neutrinos to transmit information at up to twice the speed of light, but no faster. The clues saying that c is the fundamental constant that we should expect to see in any continuous analogue of a locally causal universe, are strong enough that I’ll bet on them at 99-to-1 odds.
What I can’t make out is whether Lorentz violation throws away locality; employs a more complicated definition of c which is different in some directions than others; makes the effect of the constant different on neutrinos and photons; or, well, what exactly.
I would happily amend the bet to be annulled in the case that any more complicated definition of c is adopted by which there is still a constant of time/space locality in causal propagation, but it makes photons and neutrinos move at different speeds.
The trouble is that physicists don’t read books like Causality and don’t understand local causality as part of the apparent character of physical law, which is why some of them still believe in the “collapse of the wavefunction”—it would be an exceptional physicist whom we could simply ask whether the Standard Model Extension preserves locally continuous causality with c as the neighborhood-size constant.
This is starting to remind me of Kant. Specifically, his attempt to provide an a priori justification for the then-known laws of physics. This made him look incredibly silly once relativity and quantum mechanics came along.
And Einstein was better at the same sort of philosophy and used it to predict new physical laws that he thought should have the right sort of style (though I’m not trying to do that, just read off the style of the existing model). But anyway, I’d pay $20,000 to find out I’m that wrong—what I want to eliminate is the possibility of paying $20,000 to find out I’m right.
You need to distinguish different notions of local causality. SR implies in most forms a very strong form of local causality that you seem to be using here. But it is important to note that very well-behaved systems can fail to obey this, and it isn't just weird systems. For example, a purely Newtonian universe won't obey this sort of strong local causality. A particle from far away can have arbitrarily high velocity and smack into the region we care about. The fact that such well-behaved systems are fine with weaker forms of local causality suggests that we shouldn't assign such importance to local causality.
This isn’t a well-defined question. It depends very much on what sort of Lorentz violation you are talking about. Imagine that you are working in a Newtonian framework and someone asks “well, if gravity doesn’t always decrease at a 1/r^2 rate, will the three body problem still be hard?” The problem is that the set of systems which violate Lorentz is so large that saying this isn’t that helpful.
The vast majority of physicists aren't thinking about how to replace the fundamental laws with other, more unifying fundamental laws. The everyday work of physicists is stuff like trying to measure the rest mass of elementary particles more precisely, or being better able to predict the properties of pure water near a transition state, or trying to better model the behavior of high-temperature superconductors. They don't have reason to think about these issues. But even if they did, they probably wouldn't take these sorts of ideas as seriously as you do. Among other problems, strong local causality is something which appeals to a set of intuitions. And humans are notoriously bad at intuiting how the universe behaves. We evolved to get mates and avoid tigers, not to intuit the details of the causal structure of reality.
And just like that, Many-Worlds clicked for me. It's now incredibly obvious just how preposterous wavefunction collapse is, and this new intuitive mental model clears up a lot of the frustrating sticking points I was having with QM. c as the speed limit of information in the universe and the notion of local causality have been a native part of my view of the universe for a while, but it wasn't until that sentence that I connected them to decoherence.
Edit: Wow, a lot more things just clicked, including quantum suicide. My priority on cryonics just shot up several orders of magnitude, and I'm going to sign up once I've graduated and start bringing in income.
Eliezer, if you have never seen The Prestige, I recommend you go and watch it. It provides a nice allegory for MW/quantum suicide that I think a lot of lay-people will be able to connect to easily. Could help when you’re explaining things.
Edit2: Just read your cryonics 101, and while the RIGHT NOW message punctured my akrasia, I looked it up and even the $310/yr is not affordable right now. However, it's far more affordable than I had thought, and in a couple of months I should be in a position where this becomes sustainably possible.
By the way, thank you. You probably know this on an intuitive level, but it should be good to hear that your work may very well be saving lives.
Username, you’re having a small conversion experience here, going from “causality is local” to “wavefunction collapse is preposterous” to “I understand quantum suicide” to “I’d better sign up for cryonics once I graduate” in rapid succession. It’s a shame we can’t freeze you right now, and then do a trace-and-debug of your recent thoughts, as a case study.
This was a somewhat muddled comment from Eliezer. Local causality does not imply an upper speed limit on how fast causal influences can propagate. Then he equivocates between locality within a configuration and locality within configuration space. Then he says that if only everyone in physics thought like this, they wouldn’t have wrong opinions about how QM works. I can only guess how you personally relate all that to decoherence. And from there, you get to increased confidence in cryonics. It could only happen on Less Wrong. :-)
ETA: Some more remarks:
Locality does not imply a maximum speed. Locality just means that causes don’t jump across space to their effects, they have to cross it point by point. But that says nothing about how fast they cross it. You could have a nonrelativistic local quantum mechanics with no upper speed limit. Eliezer is conflating locality with relativistic locality, which is what he is trying to derive from the assumption of locality. (I concede that no speed limit implies a de-facto or practical nonlocality, in that the whole universe would then be potentially relevant for what happens here in the “next moment”; some influence moving at a googol light-years per second might come crashing in upon us.)
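A concrete instance of this point, as a sketch: the 1-D heat equation du/dt = D d²u/dx² is about as local as dynamics get (each point is updated from its immediate neighborhood), yet its Green's function is nonzero everywhere for any t > 0, so it has no finite propagation speed.

```python
# Local update rule, infinite signal speed: the heat kernel
#   G(x, t) = exp(-x^2 / (4 D t)) / sqrt(4 pi D t)
# is nonzero at every x for arbitrarily small t > 0.
import math

D = 1.0        # diffusion constant (arbitrary illustrative units)
x = 100.0      # a point far from a delta-function source at x = 0
t = 1e-6       # an arbitrarily short time later

log_G = -x**2 / (4 * D * t) - 0.5 * math.log(4 * math.pi * D * t)
print(log_G)   # enormously negative, but finite: the influence is tiny, not zero
# Locality of the dynamics alone does not give you a maximum speed;
# that is an additional assumption, which is the point being made above.
```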
Equivocating between locality in a configuration and locality in a configuration space: A configuration is, let’s say, an arrangement of particles in space. Locality in that context is defined by distance in space. But configuration space is a space in which the “points” themselves are whole configurations. “Locality” here refers to similarity between whole configurations. It means that the amplitude for a whole configuration is only immediately influenced by the amplitudes for infinitesimally different whole configurations.
Suppose we’re talking about a configuration in which there are two atoms, A and B, separated by a light-year. The amplitude for that configuration (in an evolving wavefunction) will be affected by the amplitude for a configuration which differs slightly at atom A, and also by the amplitude for a configuration which differs slightly at atom B, a light-year away from A. This is where the indirect nonlocality of QM comes from—if you think of QM in terms of amplitude flows in configuration space: you are attaching single amplitudes to extended objects—arbitrarily large configurations—and amplitude changes can come from very different “directions” in configuration space.
Eliezer also talks about amplitudes for subconfigurations. He wants to be able to say that what happens at a point only depends on its immediate environment. But if you want to talk like this, you have to retreat from talking about specific configurations, and instead talk about regions of space, and the quantum state of a “region of space”, which will associate an amplitude with every possible subconfiguration confined to that region.
This is an important consideration for MWI, evaluated from a relativistic perspective, because relativity implies that a “configuration” is not a fundamental element of reality. A configuration is based on a particular slicing of space-time into equal-time hypersurfaces, and in relativity, no such slicing is to be preferred as ontologically superior to all others. Ultimately that means that only space-time points, and the relations between them (spacelike, lightlike, timelike) are absolute; assembling sets of points into spacelike hypersurfaces is picking a particular reference frame.
This causes considerable problems if you want to reify quantum wavefunctions—treat them as reality, rather than as constructs akin to probability distributions—because (for any region of space bigger than a point) they are always based on a particular hypersurface, and therefore a particular notion of simultaneity; so to reify the wavefunction is to say that the reference frame in which it is defined is ontologically preferred. So then you could say, all right, we’ll just talk about wavefunctions based at a point. But building up an extended wavefunction from just local information is not a simple matter. The extended wavefunction will contain entanglement but the local information says nothing about entanglement. So the entanglement has to come from how you “combine” the wavefunctions based at points. Potentially, for any n points that are spacelike with respect to each other, there will need to be “entanglement information” on how to assemble them as part of a wavefunction for configurations.
I don’t know where that line of thought takes you. But in ordinary Copenhagen QM, applied to QFT, this just doesn’t even come up, because you treat space-time, and particular events in space-time, as the reality, and wavefunctions, superpositions, sums over histories, etc, as just a method of obtaining probabilities about reality. Copenhagen is unsatisfactory as an ontological picture because it glosses over the question of why QM works and of what happens in between one “definite event” and the next. But the attempt to go to the opposite interpretive pole, and say “OK, the wavefunction IS reality” is not a simple answer to your philosophical problems either; instead, it’s the beginning of a whole new set of problems, including, how do you reify wavefunctions without running foul of relativity?
Returning to Eliezer’s argument, which purports to derive the existence of a causal speed-limit from a postulate of “locality”: my critique is as informal and inexact as his argument, but perhaps I’ve at least shown that this is not as simple a matter as it may appear to the uninformed reader. There are formidable conceptual problems involved just in getting started with such an argument. Eliezer has the essentials needed to think about these topics rigorously, but he’s passing over crucial details, and he may thereby be overlooking a hole in his intuitions. In mathematics, you may start out with a reasonable belief that certain objects always behave in a certain way, but then when you examine specifics, you discover a class of cases which work in a way you didn’t anticipate.
What if you have a field theory with no speed limit, but in which significant and ultra-fast-moving influences are very rare; so that you have an effective “locality” (in Eliezer’s sense), with a long tail of very rare disruptions? Would Eliezer consider that a disproof of his intuitive idea, or an exception which didn’t sully the correctness of the individual insight? I have no idea. But I can say that the literature of physics is full of bogus derivations of special relativity, the Born rule, the three-dimensionality of space, etc. This derivation of “c” from Pearlian causal locality certainly has the ingredients necessary for such a bogus derivation. The way to make it non-bogus is to make it deductively valid, rather than just intuitive. This means that you have to identify and spell out all the assumptions required for the deduction.
This may or may not be the result of day 2 of modafinil. :) I don’t think it is, because I already had most of the pieces in place, it just took that sentence to make everything fit together. But that is a data point.
Hm, a trace-debug. My thought process over the five minutes that this took place was manipulation of mental imagery of my models of the universe. I'm not going to be able to explain much more clearly than that, unfortunately. It was all very intuitive and not at all rigorous; the closest representation I can think of is Feynman's thinking about balls. I'm going to have to do a lot more reading as my QM is very shaky, and I want to shore this up. It will also probably take a while until this way of thinking becomes the natural way I see the universe. But it all lines up, makes sense, and aligns with what people smarter than me are saying, so I'm assigning a high probability that it's the correct conclusion.
An upper speed limit doesn’t matter—all that matters is that things are not instantaneous for locality to be valid.
A conversion experience is a very appropriate term for what I'm going through. I'm having very mixed emotions right now. A lot of my thoughts just clarified, which simply feels good. I'm grateful, because I live in an era where this is possible and because I was born intelligent enough to understand. Sad, because I know that most if not all of the people I know will never understand, and never sign up for cryonics. But I'm also ecstatic, because I've just discovered the cheat code to the universe, and it works.
I just made a long-winded addition to my comment, expanding on some of the gaps in Eliezer’s reasoning.
Well, you’re certainly not backing down and saying, hang on, is this just an illusory high? It almost seems inappropriate to dump cold water on you precisely when you’re having your satori—though it’s interesting from an experimental perspective. I’ve never had the opportunity to meddle with someone who thinks they are receiving enlightenment, right at the moment when it’s happening; unless I count myself.
From my perspective, QM is far more likely to be derived from ’t Hooft’s holographic determinism, and the idea of personal identity as a fungible pattern is just (in historical terms) a fad resulting from the incomplete state of our science, so I certainly regard your excitement as based mostly on an illusion. It’s good that you’re having exciting ideas and new thoughts, and perhaps it’s even appropriate to will yourself to believe them, because that’s a way of testing them against the whole of the rest of your experience.
But I still find it interesting how it is that people come to think that they know something new, when they don’t actually know it. How much does the thrill of finally knowing the truth provide an incentive to believe that the ideas currently before you are indeed the truth, rather than just an interesting possibility?
From experiences back when I was young and religious, I’ve learned to recognize moments of satori as not much more than a high (have probably had 2-3 prior). I enjoy the experience, but I’ve learned skepticism and try not to place too much weight on them. I was more describing the causes for my emotional states rather than proclaiming new beliefs. But to be completely honest, for several minutes I was convinced that I had found the tree of life, so I won’t completely downplay what I wrote.
I suspect it has evopsych roots relating to confidence, the measured benefits of a life with purpose, and good-enough knowledge.
Reading 't Hooft's paper I could understand what he was saying, but I'm realizing that the physics is out of my current depth. And I understand the argument you explained about the flaws in spatial (as opposed to configuration) locality. I'll update my statement that 'Many-Worlds is intuitively correct' to 'Copenhagen is intuitively wrong,' which I suppose is where my original logic should have taken me—I just didn't consider strong MWI alternatives. Determinism kills quantum suicide, so I'll have to move down the priority of cryonics (though the 'if MWI then quantum suicide then cryonics' logic still holds and I still think cryonics is a good idea. I do love me a good hedge bet). But like I said, I'm not at all qualified to start assigning likelihoods here between different QM origins. This requires more study.
I don’t see the issue with consciousness as being represented by the pattern of our brains rather than the physicality of it. You are right that we may eventually find that we can never look at a brain with high enough resolution to emulate it. But based on cases of people entering a several-hour freeze before being revived, the consciousness mechanism is obviously robust and I say this points towards it being an engineering problem of getting everything correct enough. The viability of putting it on a computer once you have a high enough resolution scan is not an issue—worst case scenario you start from something like QM and work up. Again this assumes a level of the brain’s robustness (rounding errors shouldn’t crash the mind), but I would call that experimentally proven in today’s humans.
Note also that some of the recent papers do explicitly discuss causality issues. See e.g. this one.
Hmm, would you be willing to bet on either the 10% claim or the 80% claim?
Everything you have said until the last paragraph seems reasonable to me, and then those extremely high probabilities jump out.
Not necessarily, there could be a distinguished frame of reference.
That might preserve before-and-after. It wouldn’t preserve the locality of causality. Once you throw away c, you might need to take the entire frame of the universe into account when calculating the temporal successor at any given point, rather than just the immediate spatial neighborhood.
There could be some other special velocity than c. Like, imagine there’s some special reference frame in which you can send superluminal signals at exactly 2.71828 c in any direction. In other reference frames, this special velocity depends on which direction you send the signal. Lorentz invariance is broken. But the only implication for local causality is that you need to make your bubble 2.71828 times bigger.
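A small sketch of that direction dependence, using the ordinary Lorentz velocity transformation (the 0.1 c boost is an arbitrary illustrative choice):

```python
# If a signal moves at u = 2.71828 c in the special (preferred) frame, its
# coordinate speed in a frame moving at v along the signal's line follows
# the usual velocity transformation u' = (u - v) / (1 - u*v/c^2), and it
# differs between the forward and backward directions.
c = 1.0
u = 2.71828 * c       # special signal speed in the preferred frame
v = 0.1 * c           # observer's velocity relative to the preferred frame

def transformed_speed(u, v):
    return (u - v) / (1 - u * v / c**2)

print(transformed_speed(+u, v))   # forward:  ~ +3.60 c
print(transformed_speed(-u, v))   # backward: ~ -2.22 c
# Lorentz invariance is broken: the special velocity looks different
# depending on the frame and on the direction the signal is sent.
```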
People in this thread with physics backgrounds should say so so that I can update in your direction.
When I looked at the paper, my impression is that it was a persistent result in the experiment, which would explain publication: the experiment’s results will be public and someone, eventually, will notice this in the data. Better that CERN officially notice this in the data than Random High Energy Physicist. People relying on CERN’s move to publish may want to update to account for this fact.
This is a really good point.
Forgive me for being a bit slow, but I honestly don’t understand what you mean. I don’t know why their publishing the results needs explanation; they already said it was because they couldn’t find an error and are hoping that someone else will find one if it’s there. Is your point that the fact that CERN published this rather than someone else is to be taken as evidence of its veracity? Or do you mean something else?
Let's say you're a physicist maximizing utility. It's pretty embarrassing to publish results with mistakes in them, and the more important the results, the more embarrassing it would be to announce results later shown to be the product of some kind of incompetence. So one can usually expect published results of serious import to have been checked over and over for errors.
But the calculus changes when we introduce the incentive of discovering something before anyone else. This is particularly the case when the discovery is likely to lead to a Nobel prize. In this case a physicist might be less diligent about checking the work in order to make sure she is the first out with the new results.
Now in this case CERN-OPERA is pretty much the only game in town. No one else can measure this many neutrinos with this kind of accuracy. So it would seem like they could take all the time they needed to check all the possible sources of error. But if Hyena is right that OPERA’s data is/was shortly going to be public then they risk someone outside CERN-OPERA noticing the deviation from expected delay and publishing the results. By itself that is pretty embarrassing and it introduces some controversy regarding who deserves credit for the discovery.
Now, after watching the presentation, I get the sense that they really did check everything they could think of, and it sounds like they took about six months to prepare the analysis. It also sounds like all the relevant calibration issues are just too tricky and complex for anyone outside CERN-OPERA to be the first to publish without risking embarrassment. Nor do I know for sure what kind of access outside physicists had to the results. So I think the alleged effect was at most minimal. But updating on publication requires a good model of the incentives the physicists faced.
Neutrinos, not neutrons (very different particles; neutrons are much better understood and easier to work with).
There's work in the US at Fermilab which could reasonably measure things at this level of accuracy. I don't know much about the Japanese work, but stuff related to SK might be able to do similar things. Other than those issues your analysis seems accurate. None of these points detract from the general thrust of your argument.
Edited- Neutrinos, obviously. Brain fart.
I think Fermilab-MINOS can measure such things but I believe I read they have to update and recalibrate a bunch of things to get more accuracy, first. (Recall MINOS already saw this same effect but not at a statistically significant level. Obviously, they now have an incentive to improve their accuracy.)
I think Hyena means they had a reason to publish other than believing the result is correct.
Correct.
My point is that CERN's publication of the anomaly is implied by its existence and an assumption that CERN is minimally competent to run a high-level research project. Therefore, the publication itself gives us no information we did not already have. (The paper itself doesn't even really give us anything important by noting the anomaly, either, since our beliefs are about the implications of the anomaly, so its existence in itself can't be part of the calculation.)
Ah. Thank you for clarifying!
P = 0.95 that the reporting will be much sparser when the results are overturned.
Relevant: The Beauty of Settled Science
I’m waiting for another experiment before I get too worked up about this result.
That MINOS saw something like this before is pretty interesting. Another thing to consider is SN 1987A—at the speed the CERN neutrinos were supposedly traveling, we should have detected the neutrinos from SN 1987A about four years before it was visible.
The fact that this was made public like this suggests they are very confident they haven’t made any obvious errors.
This paper discusses the possibility of neutrino time travel.
There is a press conference at 10 AM EST.
I'll say 0.9 non-trivial experimental set-up error (no new physics but nothing silly either), 0.005 something incompetent or fraudulent. The remainder is new physics: "something I don't know about", "neutrinos sometimes travel backwards in time" and "special relativity is wrong" at 8000:800:1.
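One reading of those odds, as a quick sketch (my arithmetic, not the commenter's): the leftover 0.095 of probability mass split 8000:800:1.

```python
# Implied probabilities if the remainder is split 8000:800:1.
remainder = 1 - 0.9 - 0.005          # = 0.095
ratios = {
    "something I don't know about": 8000,
    "neutrinos sometimes travel backwards in time": 800,
    "special relativity is wrong": 1,
}
total = sum(ratios.values())
for name, r in ratios.items():
    print(name, remainder * r / total)
# roughly 0.086, 0.0086 and 1e-5 respectively
```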
Does that work? Once you beat light don’t you just win the speed race? The in-principle upper bound on what can be influenced just disappears. The rest is just engineering. Trivial little details of how to manufacture a device that emits a finely controlled output of neutrinos purely by shooting other neutrinos at something.
I think so; with anything noticeably faster than c, can't you just ping-pong between paired receiver/emitters, gaining a little distance into the past with each ping-pong? (If you're only gaining a few nanoseconds with each cycle, it might be expensive in equipment or energy, but for the right information from days/weeks in the future—like natural disaster warnings—it'd be worth it, even ignoring hypercomputation issues.)
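For what it's worth, a minimal sketch of why any superluminal signal opens this door in ordinary special relativity (i.e. with no preferred frame): the reception of a faster-than-light signal occurs before its emission in frames boosted past c divided by the signal speed, which is what lets paired stations in relative motion relay a message into their shared past. The 2.5e-5 excess is borrowed from figures quoted later in the thread; the boost is an arbitrary illustrative choice.

```python
# Emission at (t=0, x=0); reception at (t = d/(a*c), x = d) for a signal
# moving at a*c with a > 1. In a frame boosted at v with v/c > 1/a, the
# reception time t' = gamma * (t - v*x/c^2) comes out negative.
import math

c = 1.0
a = 1.0 + 2.5e-5          # fractional speed excess (assumed, from the thread)
d = 1.0                   # distance covered by the superluminal signal
v = 0.9999999 * c         # boost chosen so that v/c > 1/a (barely)

gamma = 1 / math.sqrt(1 - v**2 / c**2)
t_reception = d / (a * c)
t_prime = gamma * (t_reception - v * d / c**2)
print(t_prime)            # negative: reception precedes emission in this frame
# With a preferred frame (Lorentz violation, as discussed elsewhere in the
# thread) this particular loophole closes.
```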
“Yeah, I only go a little into the past each time, but I make it up in volume!”
What’s that a quote from? I’d just Google, but you changed a word or two, I think.
I just made it up, trying to be silly. It’s just an application of the standard “low margin, make it up on volume”. It barely even makes sense as a joke, since the idea is actually sound (or at least not unsound on its face). If you can go any amount into the past, then you could, it seems, stack the process so that you go as far as you want into the past.
I doubt Silas was thinking of this, but it reminded me of SNL’s “First Citiwide Change Bank” commercial.
That’s what she said.
Go Mr. Parker!
I’ll be honest, reading that link, that show sounds terrible.
I like it. They used difficult and expensive time travel to undo major catastrophes.
Well, I’d say there’s a significant chance you’d end up with a boom instead, for invoking the (quantum) chronology protection conjecture.
That wouldn’t necessarily stop you in all cases, though. It just means you need quantum computer-level isolation, or a signal that doesn’t include any actual closed timelike curves—that is, you could hypothetically send a signal from 2011 Earth to 2006 Alpha Centauri so long as the response takes five years to get back.
Hmm, I don't think most variants of chronology protection imply inherently destructive results. But your remark made me feel all of a sudden very worried that if this is real it could be connected to the Great Filter. I'm almost certainly assigning this more emotional weight than the very tiny probability justifies.
I don’t know about you but the emotion I associate with the possibility is fascination, curiosity and some feeling that we need a word for along the lines of entertainment-satisfaction. It’s just so far out into far mode that it doesn’t associate with visceral fear. And given the low probability it is one instance of disconnection of emotion to knowledge of threat that doesn’t seem like a problem! :)
Don’t worry, I’m pretty sure it’d be a tiny boom. ;)
No free energy, after all.
How does this relate to free energy?
If there was an explosion big enough to cause worldwide destruction, where would the energy come from?
What, as in “You fools, you’ve doomed us all!”?
Hey, I’m not the one who broke physics. Take it up with CERN! ;)
“Recent CERN reports of faster than light neutrinos will be found to be mistaken within 3 months”, PB.com.
The problem is that most of those people are probably guessing as to when it will be found to be mistaken.
Any finding that it is mistaken will have a ‘when’ attached, I think...
My grandfather is doomed, doomed I say!
Mwahahaha!
And what, if I may ask, are your plans for your grandmother?
It’s gonna be Lazarus Long all over again -_-;
Aha! I knew wedrifid was my worst enemy!
I strongly suspect that this is due to human error (say 95%). A few people in this thread are batting around much higher probabilities, but given that this isn't a bunch of crackpots but researchers at CERN, this seems like overconfidence. (1-10^-8 is really, really confident.) The strongest evidence that this is an error is that the measured speed isn't much faster than light but only a tiny bit over.
I'm now going to list some of that 5%. I don't know enough to discuss their likelihood in detail.
1) Neutrinos oscillating into a tachyonic form. This seems extremely unlikely. I’m not completely sure, but I think this would violate CPT among other things.
2) Neutrinos oscillating into a sterile neutrino that is able to travel along another dimension. We can approximately bound the number of neutrino types by around 6 (this extends from the SN 1987A data and solar neutrino data).
Both 1 and 2 require extremely weird situations where neutrinos have an extremely low probability of oscillating into a specific form but a high probability of oscillating away from it. (If the probability of going to this form were high, we would have seen it in the solar neutrino deficit.) These both have the nice feature of potentially explaining dark matter as well.
3) Photons have mass, and we need to distinguish between the speed of light and c in SR. The actual value of c in SR is slightly higher than what photons generally travel at, so high energy very low mass particles can travel faster than the speed of light but not faster than c. This runs into a lot of problems, such as the fact that a lot of SR can be derived from Maxwell’s equations and some reasonable assumptions about conservation, symmetry and reference frames. So the speed of light should be the actual value showing up in SR.
One other thing to note that hasn't gotten a lot of press: if neutrinos regularly do this, we should have seen the SN 1987A neutrinos years before the light arrived, rather than just a few hours before. This is evidence against. But it is only weak evidence, since the early neutrino detectors were weak enough that this sort of thing could conceivably have been missed. Moreover, the Mont Blanc detector did detect a burst of neutrinos a few hours before the main SN 1987A burst. This is generally considered to be a statistical fluke, but it could potentially have been neutrinos traveling faster than the speed of light. Problem with this: why would none of the other detectors have also seen that early burst? Second problem: if this were the case, the early SN 1987A neutrinos might indeed have been traveling faster than light, but by much, much less than the claim here. The OPERA claim amounts to neutrinos traveling on the order of 1/10,000 to 1/40,000 of c faster than they should. The Mont Blanc burst would require them traveling only on the order of 10^-9 c faster than they should.
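A quick check of the SN 1987A numbers, as a sketch (the ~168,000 light-year distance to SN 1987A is an assumption not stated in the thread; the speed excesses are the ones quoted above):

```python
# If neutrinos beat light by a fraction eps of c, then over a distance D
# (in light-years) they arrive roughly D * eps years early.
D = 168000                       # light-years to SN 1987A (assumed)
for eps in (2.5e-5, 1e-4):       # the 1/40,000 and 1/10,000 figures above
    print(eps, D * eps)          # ~4.2 years and ~16.8 years early
# which is where the "we should have seen the neutrinos years before the
# light" argument comes from.
```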
The main problem with 3) is that if photons have mass, then we would observe differences in speed of light depending on energy at least as big as the difference measured now for neutrinos. This seems not to be the case and c is measured with very high accuracy. If photons traveled with some velocity lower than c, but constant independent of energy, that would violate special relativity.
Yes, but we almost always measure c precisely using light near the visible spectrum. Rough estimates were first made based on the behavior of Jupiter and Saturn’s moons (their eclipses occurred slightly too soon when the planets were near Earth and slightly too late when they were far from Earth).
Variants of a Foucault apparatus are still used and that’s almost completely with visible light or near visible light.
One can also use microwaves to do clever stuff with cavity resonance. I’m not sure if there would be a noticeable energy difference.
The ideal thing would be to measure the speed of light for higher energy forms of light, like x-rays and gamma rays. But I’m not aware of any experiments that do that.
The experimental upper bound on the photon mass is 10^-18 eV. Photons near the visible spectrum have energies of about 10^-3 eV, which means their relative deviation from c is of order 10^-30. Gamma rays would be even closer. I don't think the photon mass is measurable via the speed of light.
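The 10^-30 figure follows from the ultra-relativistic expansion 1 - v/c ≈ (mc²)²/(2E²); a quick sketch with the numbers above:

```python
# Fractional speed deficit of a massive photon, from the quoted numbers.
m_c2 = 1e-18        # photon mass bound in eV (quoted above)
E = 1e-3            # typical visible-light photon energy in eV
deficit = (m_c2 / E) ** 2 / 2
print(deficit)      # 5e-31, i.e. of order 10^-30, as stated
```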
Err… build a broad spectrum telescope and look at an unstable stellar entity?
That's an interesting idea. But the method one uses to detect gamma rays or x-rays is very different from what one uses to detect visible light, so calibrating would be tough. And most unstable events take place over time, so this would be really tough. Look at, for example, a supernova: even the neutrino burst lasts on the order of tens of seconds. Telling whether the gamma rays arrived at just the right time or not would seem to be really tough. I'm not sure; I would need to crunch the numbers. It certainly is an interesting idea.
Hmm, what about actively racing them? Same method as yours but closer in. Set off a fusion bomb (which we understand really well) far away (say around 30 or 40 AU out). That would be on the order of a few light-hours away, which might be enough to see a difference if one knew that everything had to start at the exact same time.
Short answer: The numbers come out in the ballpark of hours not seconds.
Being closer in relies on trusting your engineering competence to be able to calibrate your devices well. Do it based off interstellar events and you just need to go “Ok, this telescope went bleep at least a few minutes before that one” then start scribbling down math. I never trust my engineering over my physics.
Photons having mass would screw up the Standard Model too… right?
Not necessarily. (Disclaimer: Physics background but this is not my area of expertise; I am working from memory of courses I took >5 years ago). In electroweak unification, there are four underlying gauge fields, superpositions of which make up the photon, W bosons, and Z boson. You have to adjust the coefficients of the combinations very carefully to make the photon massless and the weak bosons heavy. You could adjust them slightly less carefully and have an extremely light, but not massless, photon, without touching the underlying gauge fields; then you can derive Maxwell and whatnot using the gauge fields instead of the physical particles, and presumably save SR as well.
Observe that the current experimental upper limit on the photon mass (well, I say current—I mean, the first result that comes up in Google; it’s from 2003, but not many people bother with experimental bounds on this sort of thing) is 7x10^{-19} eV, or what we call in teknikal fiziks jargon “ridiculously tiny”.
SR doesn’t depend on behaviour of gauge fields. Special relativity is necessary to have a meaningful definition of “particle” in field theory. The gauge fields have to have zero mass term because of gauge invariance, not Lorentz covariance. The mass is generated by interaction with Higgs particle, this is essentially a trick which lets you forget gauge invariance after the model is postulated. It doesn’t impose any requirements on SR either.
I was thinking of how Lorentz invariance was historically arrived at: From Maxwell’s equations. If the photon has mass, then presumably Maxwell does not exactly describe its behaviour (although with the current upper bound it will be a very good approximation); but the underlying massless gauge field may still follow Maxwell.
First we should clarify what exactly is meant by "following Maxwell". For example, in electrodynamics (weak interaction switched off) there is interaction between the electron field and photons. Is this Maxwell? Classical Maxwell equations include the interaction of the electromagnetic field with current and charge densities, but they don't include equations of motion for the charges. Nevertheless, we can say that in quantum electrodynamics:
- the photon obeys Maxwell, because the electrodynamics Lagrangian is identical to the classical Lagrangian which produces the Maxwell equations (plus equations of motion for the charges);
- the photon doesn't obey Maxwell, because due to quantum corrections there is an extremely weak photon self-interaction, which is absent in classical Maxwell.
Note that the problem has nothing to do with masses (photons remain massless in QED), the Glashow-Weinberg-Salam construction of electroweak gauge theory, or the Higgs boson. The apparent Maxwell violation (here, scattering of colliding light beams) arises because on the quantum level one can't prevent the electron part of the Lagrangian from influencing the outcome, even if there are no electrons in the initial and final state. Whether or not this is viewed as a Maxwell violation is rather a choice of words. The electromagnetic field still obeys equations which are free Maxwell + interaction with non-photon fields, but there are effects which we don't see in the classical case. Also, those violations of Maxwell are perfectly compatible with Lorentz covariance.
In the case of vector boson mass generation, one may again formulate it in two different ways:
- the vector boson follows Maxwell, since it obeys equations which are free Maxwell + interaction with Higgs;
- it doesn't follow Maxwell, because the interaction with Higgs manifests itself as an effective mass.
Again this is mere choice of words.
Now, you mentioned the linear combinations of non-physical gauge fields which give rise to the physical photon and weak interaction bosons. The way you put it, it seems that the underlying fields, which correspond to the U(1) and SU(2) gauge group generators, are massless and the mass arises somehow in the process of combining them together. This is not the case. The underlying fields all interact with the Higgs and therefore are all massive. Even if the current neutrino affair led to a slight revision of photon masslessness, the underlying fields would be "effectively massive" by interaction with the Higgs (I put "effectively massive" in quotes because it's pretty weird to speak about effective properties of fields which are not measurable).
Of course, your overall point is true—there is no fundamental reason why photon couldn’t obtain a tiny mass by the Higgs mechanism. Photon masslessness isn’t a theoretical prediction of the SM.
Ok, I sit corrected. This is what happens when an experimentalist tries to remember his theory courses. :)
Ok. I think there's one thing that should be stated explicitly in this thread that may not have been getting enough attention (and which I probably should have made more explicit in my own comments).
The options are not “CERN screwed up” and “neutrinos can move faster than c.” I’m not sure about the actual probabilities but P(neutrinos can move faster than c|CERN didn’t screw up) is probably a lot less than P(Weird new physics that doesn’t require faster than light particles|CERN didn’t screw up).
I did say “Error caused by new physical effect. P = 0.15” right in the first comment in this thread. It’s just that we don’t know enough about the design of the experiment to say much about it. Do you know how the neutrinos were generated/detected?
The neutrino generation is somewhat indirect. Protons are accelerated into graphite, and the resulting particles are then directed so that they decay into muons and muon neutrinos. The muons are quickly lost (muons don't like to interact with much, but a few kilometers of solid rock will block most of them). The detector itself is set up to detect specifically the neutrinos which have oscillated into tau neutrinos.
The detector itself is a series of lead plates interleaved with layers of light-sensitive material, plus scintillator counters to detect events in the light-sensitive layers. I don't fully understand the details of the detector (in particular, I don't know how they differentiate tau neutrinos hitting the lead plates from muon neutrinos or electron neutrinos), but I naively presume that there's some set of characteristic reactions which occur for the tau neutrinos and not the other two. Since this discrepancy is for neutrinos in general, and they seem to be picking up data for all the neutrinos (I think?), that shouldn't be too much of an issue.
I've heard so far only a single hypothesis of new physics without faster-than-light travel, involving suppression of virtual particles, and I don't have anywhere near the expertise to guess whether that sort of thing is at all plausible.
There is a conserved quantity* for elementary particles that is called “lepton number.” It is defined such that leptons (electrons, muons, taus, and their respective neutrinos) have lepton number +1, and anti-leptons (positrons, antimuons, antitaus, and antineutrinos) have lepton number −1. Further, the presence of each flavor (electron, muon, tau) is conserved between the particles and the corresponding neutrinos.
For example, take the classic beta decay. A neutron decays to a proton, an electron, and an electron antineutrino. The neutron is not a lepton, so lepton number must be conserved at zero. The electron has lepton number +1 and the electron antineutrino has lepton number −1, totaling zero, and the “electron” flavor is conserved between the two of them.
Now, think about an inverse beta decay: an electron antineutrino combines with a proton to form a neutron and a positron. The electron antineutrino has lepton number −1, and so does the positron that is created; again, the “electron” flavor is conserved.
How does this apply to tau neutrinos? Reactions similar to an inverse beta decay occur when the other flavors of neutrinos interact with particles in the detector, but their flavors must be conserved, too. So, when a tau neutrino interacts, it produces a tau particle. A tau can be distinguished from an electron or muon in the detector by its mass and how it decays.
*This conservation is actually violated by neutrino oscillations, but it still holds in most other interactions.
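A trivial bookkeeping sketch of the arithmetic just described (the particle labels and helper function are mine, purely illustrative):

    # Lepton numbers for the particles mentioned above; baryons carry none.
    LEPTON_NUMBER = {
        "e-": +1, "e+": -1,
        "nu_e": +1, "nu_e_bar": -1,
        "n": 0, "p": 0,
    }

    def total_lepton_number(particles):
        return sum(LEPTON_NUMBER[p] for p in particles)

    # Beta decay: n -> p + e- + anti-nu_e (lepton number 0 on both sides).
    assert total_lepton_number(["n"]) == total_lepton_number(["p", "e-", "nu_e_bar"])

    # Inverse beta decay: anti-nu_e + p -> n + e+ (lepton number -1 on both sides).
    assert total_lepton_number(["nu_e_bar", "p"]) == total_lepton_number(["n", "e+"])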
Ok. That was basically what I thought was happening. Thanks for clarifying.
My probability distribution of explanations:
Neutrinos move faster than light in vacuum: P = 0.001
Error in distance measurement: P = 0.01
Error in time measurement: P = 0.4
Error in calculation: P = 0.1
Error in identification of incoming neutrinos: P = 0.1
Statistical fluke: P = 0.1
Outright fraud, data manipulation: P = 0.05
Other explanation: P = 0.239
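(For anyone tallying: a throwaway check that the numbers above form a normalized distribution; the labels are just abbreviations of the lines above.)

    estimates = {
        "FTL neutrinos": 0.001,
        "distance error": 0.01,
        "time error": 0.4,
        "calculation error": 0.1,
        "misidentified neutrinos": 0.1,
        "statistical fluke": 0.1,
        "fraud / data manipulation": 0.05,
        "other": 0.239,
    }
    print(round(sum(estimates.values()), 3))  # 1.0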
Having read the preprint, about the only observation I have is that I think you're overestimating the fraud hypothesis.
There’s almost a whole page of authors, the preprint describes only the measurement, and finishes with something like (paraphrasing) “we’re pretty sure of seeing the effect, but given the consequences of this being new physics we think more checking is needed, and since we’re stumped trying to find other sources of error, we publish this to give others a try too; we deliberately don’t discuss any possible theoretical implications.”
At the very least, this is the work of the aggregate group trying very hard to “do it right”; I guess there could still be one rogue data manipulator, but I would give much less than 1 in 20 that nobody else in the group noticed anything funny.
Your statistical fluke estimate is too high; the experiment was repeated something like 16,000 times.
Did they 1) measure 16,000 neutrinos and find each one above c, or 2) run the experiment 16,000 times, each run consisting of many measurements, and find that each run produced the result, or 3) measure 16,000 neutrinos, analyse the data once, and find that on average the velocity is higher than c, with 6σ significance?
Yeah, it’s more complicated than all of those but (3) is the closest.
That doesn’t exhaust all possibilities, though it seems to have been 3).
Bear in mind that many parapsychological experiments have been repeated vastly more than that. My impression is that anyone who wants to argue that this is extremely unlikely to be a statistical fluke is going to have a much harder time viewing parapsychology as the control group for science.
The comparison to parapsychology is a really poor one in this case, for what should be pretty obvious reasons. For example, we know there is no file drawer effect. What we know about neutrino speed so far comes from a) supernova measurements, which contradict these results but involved much lower-energy neutrinos, and b) direct measurements that didn't have the sample size or the timing accuracy to reveal the anomaly OPERA discovered.
But more importantly this was a six sigma deviation from theoretical prediction. As far as I know, that is unheard of in parapsychology.
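For a rough sense of scale (this is just the textbook one-sided Gaussian-tail conversion, not OPERA's actual likelihood analysis), six sigma corresponds to a p-value of order 10^-9:

    from scipy.stats import norm
    # Probability mass beyond 6 standard deviations, one-sided.
    print(norm.sf(6.0))   # ~9.9e-10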
We cannot treat physics the way we treat psychology.
Well, whatever this might say about me, the reasons aren’t obvious to me.
Right, but as I understand it, you don't need a file drawer effect to see that some of the experiments done in parapsychology still have devastatingly tiny p-values on their own, such as those done through the Stanford Research Institute. So the file drawer effect isn't really the right way to challenge the analogy.
I actually don't know what that means. Is sigma being used to indicate standard deviation? If so, then yes, there have been a number of parapsychology experiments that went in that range of significance, some more so if I recall correctly. (It has been many years since I read into that stuff, so I could be misremembering.)
My point is actually more about statistics than science, so any system that uses frequentist statistics to extract truth is going to suffer from this kind of comparison. As I understand it, the statistical methods that are used to verify measurements like this FTL neutrino phenomenon are the same kinds of techniques used to demonstrate that people can psychokinetically affect random-number generators. So either parapsychology is ridiculous because it uses bad statistical methods (in which case there’s a significant chance that this FTL finding is a statistical error), or we can trust the statistical methods that CERN used (which seems to force us to trust the statistical methods that parapsychologists use.)
(Disclaimer: I’m not trying to argue anything about parapsychology here. I’m only attempting to point out that, best as I can tell, the argument for parapsychology as the control group for science seems to suggest that the CERN results stand a fair chance of being bad statistics in action. If A implies B and we’re asserting probably-not-B, then we have to accept probably-not-A.)
How is that?
You need to provide links, because I read a fair bit on the subject and don't recall this. If I came across such results my money would be on fraud or systematic error, not a statistical fluke.
This is the kind of "outside view taken to the extreme" attitude that just doesn't make sense. We know why the statistical results of parapsychological studies tend not to be trustworthy: publication bias, the file drawer effect, exploratory research turned into hypothesis testing retroactively, etc. If we didn't know why such statistical results couldn't be trusted, then we would be compelled to seriously consider parapsychological claims. My claim is that those reasons don't apply to neutrino velocity measurements.
That’s a fair request. I don’t really have the time to go digging for those details, though. If you feel so inspired, again I’d point to the work done at the Stanford Research Institute (or at least I think it was that) where they did a ridiculous number of trials of all kinds and did get several standard deviations away from the expected mean predicted based on the null hypothesis. I honestly don’t remember the numbers at all, so you could be right that there has never been anything like a six-s.d. deviation in parapsychological experiments. I seem to recall that they got somewhere around ten—but it has been something like six years since I read anything on this topic.
That said, I get the feeling there's a bit of goalpost-moving going on in this discussion. In Eliezer's original reference to parapsychology as the control group for science, his point was that there are some amazingly subjective effects that come into play with frequentist statistics that could account for even the good (by frequentist standards) positive-result studies from parapsychology. I agree, there are a lot of problems with things like publication bias and the like, and that does offer an explanation for a decent chunk of parapsychology's material. But to quote Eliezer:
I haven’t looked at the CERN group’s methods in enough detail to know if they’re making the same kind of error. I’m just trying to point out that we can’t assign an abysmally low probability to their making a common kind of statistical error that finds a small-but-low-p-value effect without simultaneously assigning a lower probability to parapsychologists making this same mistake than Eliezer seems to.
And to be clear, I am not saying “Either the CERN group made statistical errors or telepathy exists.” Nor am I trying to defend parapsychology. I’m simply pointing out that we have to be even-handed in our dismissal of low-p-value thinking.
That doesn’t actually strike me as all that much extra improbability. A whole bunch of the mechanisms would allow both!
Can it be used to send messages?
Yes.
Relevant updates:
John Costella has a fairly simple statistical analysis which strongly suggests that the OPERA data is statistically significant (pdf). This of course doesn't rule out systematic problems with the experiment, which still seem to be the most likely explanation.
Costella has also proposed possible explanations of the data. See 1 and 2. These proposals focus on the idea of a short-lived tachyon. This sort of explanation helps explain the SN 1987A data. Costella points out that if the muon-neutrino pair becomes tachyonic while passing through the initial hadron barrier at the end of the accelerator, this would explain the data very well. The barrier has a length of 18.2 meters, which is very close to the claimed discrepancy. Costella proposes that they become tachyonic due to the natural behavior of the Higgs field; I don't have anywhere near the expertise to evaluate how reasonable that is, although he points out potential empirical problems with this hypothesis. Note that this hypothesis seems to be one of the easiest of the new-physics hypotheses to test, since one just makes the barrier longer and sees whether the neutrinos arrive correspondingly sooner.
Overall, interesting and surprisingly plausible, but I’m still betting on some form of error.
More relevant papers:
“Neutrinos Must Be Tachyons” (1997)
Abstract: The negative mass squared problem of the recent neutrino experiments from the five major institutions prompts us to speculate that, after all, neutrinos may be tachyons. There are number of reasons to believe that this could be the case. Stationary neutrinos have not been detected. There is no evidence of right handed neutrinos which are most likely to be observed if neutrinos can be stationary. They have the unusual property of the mass oscillation between flavors which has not been observed in the electron families. While Standard Model predicts the mass of neutrinos to be zero, the observed spectrum of Tritium decay experiments hasn’t conclusively proved that the mass of neutrino is exactly zero. Based upon these observations and other related phenomena, we wish to argue that there are too many inconsistencies to fit neutrinos into the category of ordinary inside light cone particles and that the simplest possible way to resolve the mystery of the neutrino is to change our point of view and determine that neutrinos are actually tachyons.
This guy seems like someone a competent science journalist would be interviewing. I can’t say I understand much of it, unfortunately.
Tachyonic neutrinos can explain SN 1987A neutrinos beating photons to Earth, and tachyonic neutrinos can explain the CERN observations, but, critically, they cannot explain both phenomena simultaneously. The SN 1987A neutrinos apparently moved slower than the CERN neutrinos, when the pure tachyonic explanation would have them move faster than the CERN neutrinos.
This isn’t to say neutrinos couldn’t be tachyons, but it would still leave the CERN data requiring an explanation.
Your point is correct. But I'd also like to note, in case anyone thinks that SN 1987A is a problem for physics: the conventional model explains SN 1987A neutrinos beating the photons to Earth. Neutrinos are produced in the core of a star when it goes supernova. Light has to slowly work its way out from the core, going through all the matter, or is produced at the very upper layers of the star. Neutrinos don't interact with much matter, so they pass through quickly and get a few hours' head start. Since they are traveling very close to the speed of light, they can arrive before the light.
This is the conventional explanation. If neutrinos routinely traveled faster than light, we'd expect the SN 1987A neutrinos to have arrived even earlier than the three hours by which they beat the light. In particular, if they traveled as fast as the OPERA measurement implies, then they should have arrived about 3-5 years before the photons. Now, we didn't have good neutrino detectors much before 1987, so it is possible that there was a burst we missed in that time range. But if so, why was there a separate pack of much slower neutrinos that arrived when we expected?
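A back-of-the-envelope version of that 3-5 year figure (the distance to SN 1987A and the OPERA fractional speed excess are numbers from outside this thread, so treat both as rough assumptions):

    distance_ly = 168_000    # approximate distance to SN 1987A, in light-years
    excess = 2.5e-5          # approximate (v - c) / c implied by the OPERA timing
    # Time saved relative to light over the whole trip, in years.
    print(distance_ly * excess)   # ~4.2 years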
There may be possible explanations for this that fit both data sets. It is remotely possible, for example, that the tau and muon neutrinos are tachyons but the electron neutrino is not, or that all but the electron neutrino are tachyons. If one then monkeyed with the oscillation parameters, it might be possible to get that the CERN sort of beam would arrive fast but the beam from SN 1987A would arrive at the right time. I haven't worked the numbers out, but my understanding is that we have not-awful estimates for the oscillation behavior, which should prevent this kludge from working. It might work if one had another type of neutrino, since that would give you six more parameters to play with. Other experiments can put an upper bound on the number of neutrino types with high probability, and the standard estimates say that there probably aren't more than 6 neutrino types. So there is room here.
I don’t know enough about the underlying physics to evaluate how plausible this sort of thing is. Right now it seems that a lot of people are brainstorming different ideas.
Is this just assuming that they travel at the same speed as recorded for the CERN ones, or has any adjustment been made for their differing energies?
This is from a naive, back-of-the-envelope calculation without taking the differing energies into account. One thing to note is that by some estimates tachyons should slow down as they gain energy. If that's the case then the discrepancy may make sense, since the neutrinos from the supernova should be, I think, higher energy.
Nope. As I said here the ones at CERN are 17GeV, whereas the ones from the supernova were 6.7MeV.
Ok. In that case this hypothesis seriously fails.
Something I hadn't realised is that neutrinos have never been observed going slower than light. If they had been observed going slower than light, then finding them also going faster would be absurd, since it would require infinite energy. But if they are always tachyons, then their travelling faster than c is much less problematic.
However I don’t see how this explains the neutrinos from the supernova. In the paper it says that higher energies correspond to lower speeds (due to imaginary mass). The ones at CERN are 17GeV, whereas the ones from the supernova were 6.7MeV. But the difference in time for the supernova was proportionately smaller than that for the CERN neutrinos.
Perhaps CERN’s experiment was in error.
So, even if neutrinos really do go faster than light, CERN messed up.
The neutrinos are not going faster than light. P = 1-10^-8
Error caused by some novel physical effect: P = 0.15
Human error accounts for the effect (i.e. no new physics): P = 0.85
This isn’t even worth talking about unless you know a serious amount about the precise details of the experiment.
EDIT: Serious updating on the papers Jack links to downthread. I hadn’t realised that neutrinos have never been observed going slower than light. P = no clue whatsoever.
I'm stupid so I shouldn't talk about physics? That's absurd; Less Wrong is devoted to discussing exactly this kind of thing. Like… really? I'm really confused by your comment. Do you think the author of the Nature News piece should not have written it, for fear of causing people to think about a result?
This kind of comment is one of the most perniciously negative things you could say here. Please try not to stop discussion before it even starts.
Instead of shutting down discussion and saying it isn’t worth talking about, maybe you should try and expand on “Error caused by some novel physical effect”.
You’re not stupid, but we’re not (as far as I know) qualified to talk about this particular experiment. There’s no hope in hell that the particles are going faster than light, so the only interesting discussion is what else could be causing the effect. This would involve an in depth knowledge of particle physics, as well as the details of the experiment, how the speed was calculated, the type of detector being used, etc. I don’t work at CERN, and I don’t think many LessWrongers do either.
LessWrong is for discussing rationality not physics. Assigning probabilities to the outcomes stretched my rationalist muscles (I wasn’t sure about 10^-8. Too high? Too low?), but that’s the only relevance this post has (and yes, I did downvote it).
It would be fine to report the anomalous result, and give an interesting exploration of what faster than light particles would imply, making it clear that it’s horrendously unlikely. But presenting it as if the particles might actually be going faster than light is misleading.
I’ve heard that the detector works by having the neutrinos hit a block where they produce some secondary particles, the results are then inferred from these particles. If these particles are doing something novel, or if the neutrinos are producing an unexpected kind of particle, then this could lead to the errors observed.
EDIT: I’m being too harsh. LessWrongers with less knowledge of the relevant physics would be perfectly justified in assigning a much higher probability to FTL than I do, and they’ve got no particular reason to update on my belief. Similarly, I expect my probability assignment would change if I learnt more physics.
I believe I am more skeptical than the average educated person about press releases claiming some fundamental facet of physics is wrong. But I would happily bet $1 against $10,000,000 that they have, indeed, observed neutrinos going faster than the currently understood speed of light.
Taken! Paypal address?
I’d rather do it through an avenue other than Paypal, since I give odds near unity that if I won, Paypal would freeze my account before I could withdraw the $10 million. Also, considering that less than .01% of the world’s population has access to $10 million USD in a reasonably liquid form, there’s some counterparty risk.
But, IIRC, you’re confident you have the resources to produce a subplanetary mass of paperclips within a few decades, so let’s do it!
Oh, sorry, I was confused and thought you were offering the bet the other way around.
I apologize for being ambiguous; I should have been more clear that 10^-8 was way too low. Hopefully you weren’t counting on those resources for manufacturing paperclips.
Sadly I’m not in possession of even 10^8 cents, so I can’t make this bet.
If you have a bitcoin address, the smallest subdivision of a bitcoin against 1 bitcoin (historically, 1 bitcoin has been worth somewhere within $10 of $10) would do the trick.
From here.
Which part of my post is this addressed to? I don’t see any direct relevance.
Or the light is slightly subluminal and the neutrinos are (almost) luminal.
There may be a bunch of reasons more probable than the assumed one.
What do you mean by that light is subluminal? Literally it means that light travels slower than light, which is probably not the intended meaning.
I suspect he means that light maybe travels slightly slower than the constant c used in relativity. Maybe photons actually have a really tiny rest-mass. Maybe our measurements of the speed of light are all in non-perfect vacuum which makes it slow down a little bit.
If they had a tiny mass, we would observe variance in measured values of c, since less energetic photons would move slower. Measurements of c have a relative precision of at least 10^-7, and no dependence on energy has been observed in vacuum. Therefore the measured speed of light doesn't differ from the relativistic c by more than 10^-7. The relative difference reported for the neutrinos seems to be 10^-5.
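For reference, the 10^-5 figure follows from the rough numbers quoted for OPERA (about 60 ns early over a roughly 730 km baseline; both values are approximate):

    c = 299_792_458.0        # m/s
    baseline_m = 730_000.0   # CERN to Gran Sasso, approximately
    early_s = 60e-9          # reported early arrival, approximately
    light_time_s = baseline_m / c    # ~2.4e-3 s
    print(early_s / light_time_s)    # ~2.5e-5, i.e. of order 10^-5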
Kloth answered as I would.
By the way, electrons in water can be faster than photons in water. No big surprise, maybe, if this happens with neutrinos and photons in a (near) vacuum.
Light can move more slowly while not in a vacuum; maybe this light was held up by something. That said, I don't understand the paper well enough to tell whether they are directly racing the neutrinos against some actual light, or just comparing it to an earlier measurement.
I don’t know whether this guy knows what he’s talking about, but it sounds plausible:
Steven Sudit:
There have been no indications that one can transmit information FTL using the Casimir effect; the work he mentions was on quantum tunneling time, which is a different beast.
That doesn’t work. They didn’t race the neutrinos against a light beam. They measured the distance to the detector using sensitive GPS.
Are they THAT sensitive? Possibly not.
In order for this to be from an error in measurement you need to be a few meters off (18 meters if that’s the only problem). There are standard GPS techniques and surveying techniques which can be used to get very precise values. They state in the paper and elsewhere that they are confident to around 30 cm. Differential GPS can have accuracy down to about 10-15 cm, and careful averaging of standard GPS can get you in the range of 20 cm, so this isn’t at all implausible but it is still a definite potential source of error.
A more plausible issue is that since parts of the detectors are underground, they didn't actively use GPS for those parts. But even then, a multiple-meter error seems unlikely, and 18 meters is a lot. It is possible that there's a combination of errors all going in the same direction, say a meter error in the distance, a small error in the clock calibration, etc., and all of that adds up even as each error remains small enough that it is difficult to detect. But they've been looking at things really closely, so one would then think that at least one of the errors would turn up.
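For reference, the 18-meter figure is just the distance light covers in the roughly 60 ns by which the neutrinos appear early (approximate numbers):

    c = 299_792_458.0    # m/s
    early_s = 60e-9      # approximate early arrival
    print(c * early_s)   # ~18 m of equivalent baseline error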
There's now a theoretical paper up on the arXiv discussing a lot of these issues. The authors are respected physics people, it seems. I have neither the time nor the expertise to evaluate it, but they seem to be claiming a resolution between the OPERA data and the SN 1987A data.
The best short form critique of this announcement I have seen is the post by theoretical physicist Matthew Buckley on the metafilter website:
Matt’s comment.
After I read that comment I clicked through to his personal website and I found a nifty layman's explanation of the necessity for Dark Matter in current cosmological theory:
Matt’s web essay on dark matter.
If you don't have time to read his comment, what he says is that the results are not obviously bogus, but they are so far-fetched that almost no physicists will find their daily work affected by the provisionally conceivable status of these results until a huge amount of double-, triple-, quadruple-, and quintuple-checking verifies them.
Obligatory xkcd reference http://xkcd.com/955/
p ( someone here cares aout this stuff but does not också check XCKD) = FAT BLOODY CHANGE I MEAN FAT BLOODY CHANCE
i should really fix the spelling above but its been a logn time since I was downvoted ISN”T THAT EXCITING
(it isn’t)
(i still will post this)
(doing it now)
I ask out of sheer curiosity, and you by no means need to answer if you don’t want to. But were you inebriated, sleep-deprived, or in another abnormal mental state when you posted this?
I, in fact, was. My apologies for the interruption.
I didn’t know you were Swedish! Your profile says you’re in Uppsala. Wanna meet in Stockholm sometime?
How about we make it into a proper Stockholm meetup?
Yup.
Done!
I’m going to go out on a limb here and say yes.
Sean Carroll has made a second blog post on the topic, to explain why faster-than-light neutrinos do not necessarily imply time travel.
And, just to reiterate the main point:
To quote one of my professors, from the AP release:
Also, Sean Carroll wrote a blog post which gives a good description of the physics and links to several other posts on the topic.
Forgive my ignorance, but… if distance is defined in terms of the time it takes light to traverse it, what’s the difference between “moving from A to B faster than the speed of light” and “moving from B to A”?
There are three things you can do:
Move from A to B.
Move between A and B faster than the speed of light. (It’s uncertain which is the start and which is the end.)
Move from B to A.
For the basic physics answer, look at Minkowski space: you can define when two events shouldn't be able to affect each other at all if nothing travels faster than light (i.e. they're separated by a spacelike interval).
More basically, we know the direction of causality from other factors; so if the neutrinos are emitted at A and interact with something at B, and both events increase entropy, then you either have to say that they traveled faster than light or that they violated the Second Law of Thermodynamics.
You are correct: moving from A to B faster than the speed of light in one reference frame is equivalent to moving from B to A faster than the speed of light in another reference frame, according to special relativity.
Second ‘faster’ should be ‘slower’, I think.
Shinoteki is right—moving slower than light is timelike, while moving faster than light is spacelike. No relativistic change of reference frame will interchange those.
What do you mean by “spacelike”?
IIRC, movement in spacetime is the same no matter which axis you designate as being time.
No. The metric treats time differently from space even as they are all on a single manifold. The Minkowski metric has three spatial dimensions with a +, and time gets a -. This is why space and time are different. Thinking of spacetime as R^4 is misleading because one doesn't have the Euclidean metric on it.
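A toy illustration of that signature and of the timelike/spacelike distinction (units with c = 1; nothing here is specific to the experiment):

    # Minkowski interval with the (-,+,+,+) signature described above, c = 1.
    def interval_squared(dt, dx, dy=0.0, dz=0.0):
        return -dt**2 + dx**2 + dy**2 + dz**2

    def classify(dt, dx):
        s2 = interval_squared(dt, dx)
        if s2 < 0:
            return "timelike (reachable slower than light)"
        if s2 > 0:
            return "spacelike (would require faster-than-light travel)"
        return "lightlike"

    print(classify(dt=2.0, dx=1.0))   # timelike
    print(classify(dt=1.0, dx=2.0))   # spacelike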
It shouldn’t. Moving from B to A slower than light is possible*, moving from A to B faster than light isn’t, and you can’t change whether something is possible by changing reference frames.
*(Under special relativity without tachyons)
What I’m trying to get at is, What does a physicist mean when she says she saw X move from A to B faster than light? The measurement is made from a single point; say A. So the physicist is at A, sees X leave at time tX, sends a photon to B at time t0, and gets a photon back from B at time t1, which shows X at B at some time tB. I’m tempted to set tB = (t0+t1)/2, but I don’t think relativity lets me do that, except within a particular reference frame.
“X travelled faster than light” only means that tX < t1. The FTL interpretation is t0 < tX < tB < t1: The photon left at t0, then X left at tX, and both met at B at time tB, X travelling faster than light.
Is there a mundane interpretation under which tB < tX < t1? The photon left A at t0, met X at B at tB, causing X to travel back to A and arrive there at tX.
The answer appears to be No, because X would need to travel faster than light on the return trip. And this also confirms that Owen's original answer was correct: you can say that X travelled from A to B faster than light, or from B to A faster than light.
An interpretation putting t1 < tX seems to have the photon moving faster than light, backwards in time, to get from B back to A.
My question is whether he meant to say
moving from A to B faster than the speed of light in one reference frame is equivalent to moving from B to A faster than the speed of light in another reference frame
or
moving from A to B faster than the speed of light in one reference frame is equivalent to moving from B to A slower than the speed of light in another reference frame
both of which involve moving faster than light.
I meant the first one: faster than light in both directions.
You can think of it this way: if any reference frame perceived travel from B to A slower than light, then so would every reference frame. The only way for two observers to disagree about whether the object is at A or B first, is for both to observe the motion as being faster than light.
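A quick numerical illustration of that claim, with c = 1 and toy numbers: a Lorentz boost can flip the time ordering of a spacelike-separated pair of events, but never of a timelike-separated pair.

    import math

    def boosted_dt(dt, dx, v):
        # Time separation of two events as seen from a frame moving at speed v.
        gamma = 1.0 / math.sqrt(1.0 - v**2)
        return gamma * (dt - v * dx)

    print(boosted_dt(1.0, 2.0, 0.8))   # spacelike pair: negative, order flipped
    print(boosted_dt(2.0, 1.0, 0.8))   # timelike pair: still positive, order kept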
I know Owen was not talking about impossibility, I brought up impossibility to show that what you thought Owen meant could not be true.
Moving from B to A slower than the speed of light does not involve moving faster than light.