Really? I’m starting to think maybe they might be correct.
I was thinking about this earlier today, having read/posted about the original findings and the first papers of “Here are all the ways that the original findings might be incorrect.”, and then this announcement this morning.
For simplicity, let’s assume someone’s initial probabilities are:
There is a 1% chance they were correct.
There is an 11% chance they made a Type A error.
There is an 11% chance they made a Type B error.
There is an 11% chance they made a Type C error.
There is an 11% chance they made a Type D error.
There is an 11% chance they made a Type E error.
There is an 11% chance they made a Type F error.
There is an 11% chance they made a Type G error.
There is an 11% chance they made a Type H error.
There is an 11% chance they made a Type I error.
This paper establishes that there is essentially a negligible chance they made a Type A error, but there are still 8 ways they could be wrong. If I’m doing the calculations correctly, that someone should now assign about a 1.12% chance of them being correct, because ruling out Type A also makes each of the Type B-I errors proportionally more likely.
This is a gross simplification of the problem. There are certainly other possibilities besides “Correct” and “9 equally likely types of error.” But I think this helped me put the size of the task of proving the neutrinos did go faster than light into better perspective.
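A minimal sketch of that arithmetic, using the toy numbers above (1% correct, nine equally likely error types at 11% each) and treating “rule out Type A” as deleting that hypothesis and renormalizing what remains:

```python
# Toy update: remove the eliminated error type's probability mass and renormalize.
prior_correct = 0.01
n_error_types = 9
prior_per_error = 0.11  # 0.01 + 9 * 0.11 = 1.00

remaining_mass = prior_correct + (n_error_types - 1) * prior_per_error  # 0.89
posterior_correct = prior_correct / remaining_mass
print(f"{posterior_correct:.2%}")  # ~1.12%
```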
I don’t think this is an accurate assessment. One of the most obvious error forms was a statistical error (since they were using long neutrino pulses and then careful statistics to get the average arrival time). That is eliminated in this experiment. Another possible error was that detection of one neutrino could interfere with the chances of detecting other neutrinos, which could distort the actual vs. observed average (this is a known problem with some muon detector designs). This seemed unlikely, but it is also eliminated by the short pulses. They also used this as an opportunity to deal with some other timing issues. Overall, a lot of possible error sources have now been dealt with.
My understanding is that, in the situation above, the probability of their being correct stays very low until they deal with almost all of the sources of error, and is still low even then.
For instance, after dealing with 4 of the 9 sources of error, the calculation I did puts their chance of being correct at around 1.78%. If they deal with 8 of the 9, they’re still only at around an 8.33% chance of being correct. (Assuming I calculated correctly as well.)
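Generalizing the same toy renormalization, here is a hypothetical helper (my illustrative numbers, not anything from the papers) that gives the posterior after eliminating k of the 9 error types:

```python
def posterior_correct(k_eliminated, prior_correct=0.01, n_errors=9, prior_per_error=0.11):
    # Toy posterior on "correct" after ruling out k of n equally likely error types.
    remaining_mass = prior_correct + (n_errors - k_eliminated) * prior_per_error
    return prior_correct / remaining_mass

for k in (1, 4, 8, 9):
    print(k, f"{posterior_correct(k):.2%}")
# k=1: ~1.1%, k=4: ~1.8%, k=8: ~8.3%, k=9: 100%
```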
Also, I do want to clarify/reiterate that I wasn’t trying for that much accuracy. Nine equally likely sources of error is a gross simplification, and I’m not a physicist. I didn’t even count individual explanations of error to get to 9, so that assumption was itself probably influenced by heuristic/bias (most likely the fact that (11*9)+1=100). It was more of a rough guess, because I had jumped to conclusions in previous threads and wanted to think about it at least a little bit more in this one before establishing an initial position.
All that being said, I’m definitely glad they can address multiple possible sources of error at once. If they do it correctly, that should greatly speed up the turnaround time for finding out more about this.
Are you willing to make a 1:89 bet that they will eventually be proven incorrect?
Are you allowed to make bets with karma upvotes? For instance, is it reasonable to propose: “You upvote me once right now. If they confirm that neutrinos are traveling faster than the speed of light, you remove the upvote you gave me and I will upvote you 89 times.”
On the one hand, that sounds like an abuse of the karma system. But on the other hand, it also sounds somehow more fun/appropriate than a money bet, and I feel that if you manage to successfully predict FTL this far out, you deserve 89 upvotes anyway.
Can other people weigh in on whether this is a good/bad idea?
This definitely sounds like an abuse of the karma system. With this, people could reach high karma levels just by betting, as even if they’re wrong there’s no downside to this bet.
They should include downvotes in the bet, so that every possible outcome is zero-sum.
It sounded like a bad idea at first, but if the bet is 1 upvote / 1 downvote vs. 89 upvotes/89 downvotes, it could actually be a good use of the karma system. The only way to get a lot of karma would be to consistently win these bets, which is probably as good an indicator for “person worth paying attention to” as making good posts.
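For what it’s worth, a quick sketch of the expected karma for the person taking the “FTL will be confirmed” side of the zero-sum 1-vs-89 version, assuming they assign probability p to confirmation; the break-even point near 1/90 is what makes 89 a natural counterpart to a roughly 1% credence:

```python
# Expected karma for the bettor on the FTL side of a zero-sum 1 vs. 89 karma bet.
def expected_karma(p, win_payout=89, lose_cost=1):
    return p * win_payout - (1 - p) * lose_cost

for p in (0.01, 1 / 90, 0.02):
    print(f"p={p:.4f}: EV={expected_karma(p):+.3f}")
# Break-even at p = 1/90 ≈ 0.0111; the bet only has positive expected karma above that.
```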
I think we should just have a separate prediction market if for some reason we’d rather not use predictionbook.
The last time I looked at PredictionBook, the allowed values were integers 0–100, which makes it impossible to really use it for this. Here the meaningful question is whether the probability is .00001 or .0000000001.
I liked this fellow’s take.
Miley Cyrus is claiming > 1%, so your objection to PB does not apply. MC might like to distinguish between 1.1% and 1.0%, but this is minor.
If you’re recording claims, not betting at odds, then rounding to zero is not a big deal. No one is going to make a million predictions at 1-in-a-million odds. One can enter it as 0 on PB and add a comment with the precise probability. It is plausible that people would want to make thousands of predictions at 1 in 1000, but this is an unimportant complaint until lots of people are making thousands of predictions at percent granularity.
An advantage of PB over bilateral bets is that it encourages people to state their true probabilities and avoid the zero-sum game of setting odds. A well-populated market does this, too.
This is (minus the specific numbers, of course, but you too were using them as examples) exactly how I see it.
The most likely error, that of a wrong baseline, has not been addressed, so my credence has not noticeably improved. This is a very small update.
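For a sense of scale, a rough back-of-the-envelope sketch, assuming the widely reported figures of a ~60 ns early arrival over the ~730 km CERN-Gran Sasso baseline (my assumptions for illustration, not numbers from this thread): the whole anomaly corresponds to a distance error of only about 18 m, which is why the baseline measurement is such a natural suspect.

```python
# Rough scale of the reported anomaly in distance terms (illustrative figures only).
c = 299_792_458.0        # speed of light, m/s
early_arrival_s = 60e-9  # assumed ~60 ns early arrival
baseline_m = 730e3       # assumed ~730 km baseline

distance_equivalent_m = c * early_arrival_s
fractional_excess = distance_equivalent_m / baseline_m
print(f"~{distance_equivalent_m:.0f} m over 730 km ({fractional_excess:.1e} fractional excess)")
# ~18 m over 730 km, i.e. a few parts in 100,000.
```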
The reason I posted this is that I was interested in the reactions it would evoke. It seems that many people here think that any information whatsoever is valuable and that one should update on that evidence.
It is very interesting to see how many people here are very skeptical of those results, even though they are based on comparatively hard evidence and were signed by 180 top-notch scientists. Many of the same people give higher estimates, while demanding little or no evidence, for something that sounds just as simple as faster-than-light phenomena when formulated in natural language, namely recursive self-improvement.
I think many people here have updated their beliefs; I did. My initial prior was very low, though, so I still retain a very low probability for FTL neutrinos.
Natural language isn’t a great metric in this context. Also, recursive self-improvement doesn’t in any obvious way require changing our understanding of the laws of physics.
Some people think that complexity issues are even more fundamental than the laws of physics. On what basis do people believe that recursive self-improvement would be uncontrollably fast? It is easy to believe simply because it is a vague concept and none of those people have studied the relevant math. The same isn’t true for FTL phenomena, because many people are aware of how unlikely that possibility is.
The same people who are very skeptical in the case of faster-than-light neutrinos just make up completely unfounded probability estimates about the risks associated with recursive self-improvement, because it is easy to do so when there is no evidence either way.
Sure. And I’m probably one of the people here who is most vocal about computational complexity issues limiting what recursive self-improvement can do. But even then, I don’t see them as necessarily in the same category. Keep in mind that {L, P, NP, co-NP, PSPACE, EXP} all being distinct is a conjectural claim; we can’t even prove that L != NP at this point. And in order for this to produce barriers to recursive self-improvement, one would likely need even stronger claims.
Well, but that’s not an unreasonable position. If I don’t have strong evidence either way on a question, I should move my estimates close to 50%. That’s in contrast to the FTL issue, where we have about a hundred years’ worth of evidence all going in one direction, and that evidence includes other observations involving neutrinos.
SN 1987A shows that neutrinos travel at the speed of light for almost all of their journey, but it does not rule out that they might have velocities exceeding that of light very briefly at the moment they’re generated. See here for more. Note that I, like the author of the post I’ve linked, do not believe that this finding will stand up. It’s just that if it does stand up, it will be because the constant-velocity assumption is wrong.
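A rough sketch of why the constant-velocity assumption matters so much here, using commonly quoted ballpark figures (roughly 168,000 light-years to SN 1987A, neutrinos arriving within a few hours of the light, and a reported OPERA fractional excess of about 2.5e-5); these numbers are my assumptions for illustration, not from this thread:

```python
# If neutrinos kept OPERA's reported fractional speed excess for the whole trip
# from SN 1987A, they would have beaten the light by years rather than hours.
light_travel_time_years = 168_000   # ~distance in light-years = years of light travel (assumed)
opera_excess = 2.5e-5               # assumed reported (v - c) / c
observed_lead_hours = 3             # assumed observed lead of the neutrinos over the light

lead_if_constant_years = light_travel_time_years * opera_excess
lead_observed_years = observed_lead_hours / (24 * 365)
print(f"constant-excess prediction: ~{lead_if_constant_years:.1f} years early")
print(f"actually observed:          ~{lead_observed_years:.1e} years early")
# ~4 years vs ~3e-4 years, hence the suggestion that any superluminal behaviour
# would have to be confined to a brief moment near production.
```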
That would be more than enough to devote a big chunk of the world’s resources to friendly AI research, given the associated utility. But you can’t just make up completely unfounded conjectures and then claim that we don’t have evidence either way but that the utility associated with a negative outcome is huge and we should therefore take it seriously, because that reasoning will ultimately make you privilege random high-utility outcomes over theories based on empirical evidence.
Throwing out a theory as powerful and successful as relativity would require very powerful evidence, and at this point the evidence doesn’t fall that way at all.
On the other hand, the lower bound for GAI becoming a very serious problem is very low. Simply dropping the price of peak human intelligence down to the material and energy costs of a human (which breaks no laws, unless one holds that the mind is immaterial) would result in massive social displacement that would require serious planning beforehand. I don’t think it is very likely that we’d see an AI that can laugh at EXPSPACE problems, but all an AI needs to be is too smart to be easily controlled in order to mess everything up.
1 month ago I said it’s time to reconsider and was downvoted.
Testing it once again.
On a blog elsewhere discussing this, I saw someone claiming this result vindicates esoteric mysticism and mocking the physicists. Clicking through, you seem to be doing something similar.
First of all, I am NOT “mocking” anyone. If the theory isn’t right, it isn’t right. Back to the drawing board in that case.
With some anthropomorphizing one could say that “Nature is mocking”. But not me, sire! I’ve just never liked SR/GR very much, due to the Ehrenfest paradox.
Something has to give: the experiment doesn’t want to, and Ehrenfest is also stubborn.
First, the Ehrenfest paradox has been resolved adequately when GR is involved, as explained in the Wikipedia article, so it is unlikely to be relevant here.
Second, every physicist knows that SR is incomplete (it leads to GR, which doesn’t play well with QM), so a more accurate theory has to come along eventually. That said, not many people expected that a straightforward low-energy measurement like the OPERA one might be enough to show the limits of local Lorentz invariance. Of course, it in no way invalidates relativity, whether you like it or not, just outlines its potential limits.