I was thinking about this earlier today, having read/posted about the original findings and the first wave of “Here are all the ways that the original findings might be incorrect” papers, and then this announcement this morning.
For simplicity, let’s assume someone’s initial probabilities are:
There is a 1% chance they were correct.
There is an 11% chance they made a Type A error.
There is an 11% chance they made a Type B error.
There is an 11% chance they made a Type C error.
There is an 11% chance they made a Type D error.
There is an 11% chance they made a Type E error.
There is an 11% chance they made a Type F error.
There is an 11% chance they made a Type G error.
There is an 11% chance they made a Type H error.
There is an 11% chance they made a Type I error.
This paper establishes that there is essentially a negligible chance they made a Type A error. But there are still 8 ways they could be wrong. If I’m doing the calculations correctly, this means they now have about a 1.12% chance of being correct from the perspective of that someone, because these findings also mean it is more likely that they made a Type B-I error as well.
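To spell out the arithmetic behind that number (under the simplified prior above), ruling out the Type A error removes its 11% of prior probability mass and renormalizes the rest:

P(correct | no Type A error) = 0.01 / (1 − 0.11) = 1/89 ≈ 1.12%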
This is a gross simplification of the problem. There are certainly other possibilities than “Correct” and “9 equally likely types of error.” But I think this helped me put the size of the task of proving the neutrinos did go faster than light into a better perspective.
I don’t think this is an accurate assessment. One of the most obvious possible errors was a statistical one (since they were using long neutrino pulses and then careful statistics to get their average arrival time). That is eliminated in this experiment. Another possible error was that detection of a neutrino could interfere with the chances of detecting other neutrinos, which could distort the actual vs. observed average (this is a known problem with some muon detector designs). This seemed unlikely, but it is also eliminated by the short pulses. They also used this as an opportunity to deal with some other timing issues. Overall, a lot of possible error sources have now been dealt with.
My understanding is that in the situation above, the chances are still very low until they deal with almost all of the possible sources of error, and are still low even then.
For instance, after dealing with 4 of the 9 different sources of error, the calculations I did give a chance of them being correct of around 1.78%. If they deal with 8 of the 9 different sources of error, they still only have around an 8.33% chance of being correct. (Assuming I calculated correctly as well.)
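Here is a minimal sketch of that calculation (assuming the simplified prior above of 1% correct plus nine equally likely 11% error types; the function name is just illustrative):

```python
# Renormalize the prior on "correct" after ruling out k of the nine
# equally likely (11% each) error types. Purely illustrative of the
# simplified model described above.
def p_correct_after_eliminating(k, p_correct=0.01, p_each_error=0.11, n_errors=9):
    remaining_mass = p_correct + p_each_error * (n_errors - k)
    return p_correct / remaining_mass

for k in (1, 4, 8):
    print(k, round(100 * p_correct_after_eliminating(k), 2))
# prints roughly 1.12, 1.79, and 8.33 percent, matching the figures
# above up to rounding
```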
Also, I do want to clarify/reiterate that I wasn’t trying for that much accuracy. 9 equally likely sources of error is a gross simplification, and I’m not a physicist. I didn’t even count individual proposed explanations of error to get to 9, so that assumption was itself probably influenced by heuristic/bias (most likely that (11*9)+1=100). It was more of a rough guess because I jumped to conclusions in previous threads and wanted to try to think about it at least a little bit more in this one before establishing an initial position.
All that being said, I’m definitely glad they can address multiple possible sources of error at once. If they do it correctly, that should greatly speed up the turnaround time for finding out more about this.
Are you willing to make a 1:89 bet that they will eventually be proven incorrect?
Are you allowed to make bets with karma upvotes? For instance, is it reasonable to propose “You upvote me once right now. If they confirm that neutrinos are traveling faster than the speed of light, you remove the upvote you gave me and I will upvote you 89 times.”
On the one hand, that sounds like an abuse of the karma system. But on the other hand, it also sounds somehow more fun/appropriate than a money bet, and I feel that if you manage to successfully predict FTL this far out, you deserve 89 upvotes anyway.
Can other people weigh in on whether this is a good/bad idea?
This definitely sounds like an abuse of the karma system. With this, people could reach high karma levels just by betting, as even if they’re wrong there’s no downside to this bet.
They should include downvotes in the bet, so that every possible outcome is zero-sum.
It sounded like a bad idea at first, but if the bet is 1 upvote/1 downvote vs. 89 upvotes/89 downvotes, it could actually be a good use of the karma system. The only way to get a lot of karma would be to consistently win these bets, which is probably as good an indicator for “person worth paying attention to” as making good posts.
I think we should just have a separate prediction market if for some reason we’d rather not use predictionbook.
The last time I looked at PredictionBook, the allowed values were integers 0–100, which makes it impossible to really use it for this. Here the meaningful question is whether the value is .00001 or .0000000001.
I liked this fellow’s take.
Miley Cyrus is claiming > 1%, so your objection to PB does not apply. MC might like to distinguish between 1.1% and 1.0%, but this is minor.
If you’re recording claims, not betting at odds, then rounding to zero is not a big deal. No one is going to make a million predictions at 1 in a million odds. One can enter it as 0 on PB and add a comment of precise probability. It is plausible that people want to make thousands of predictions at 1 in 1000, but this is an unimportant complaint until lots of people are making thousands of predictions at percent granularity.
An advantage of PB over bilateral bets is that it encourages people to state their true probabilities and avoid the zero-sum game of setting odds. A well-populated market does this, too.
This is (minus the specific numbers, of course, but you too were using them as examples) exactly how I see it.
The most likely error—that of wrong baseline—has not been addressed, so I don’t have noticeably improved credence. This is a very small update.