The whole point of assigning 50% probability to a claim is that you literally have no idea whether or not it will happen. So of course the choice between writing down X or ~X is going to be arbitrary. That’s what 50% means.
However, this is not solved by doubling up on your predictions, since now (by construction) your predictions are highly dependent. I don’t understand the controversy about Scott getting 0/3 on 50% predictions: it happens even to perfectly calibrated people 1/8 of the time (the chance of missing all three is (1/2)^3 = 1/8), let alone to real humans. If you have a long list of statements you are 50% certain about, you have literally no reason to prefer putting one side of an issue on your prediction list rather than the other. If, however, it afterwards turns out that significantly more or fewer than half of your (arbitrarily chosen) sides were right, you probably aren’t very good at recognising when you are 50% confident (to make this clearer, imagine Scott had gotten 0/100 instead of 0/3 on his 50% predictions).
I don’t understand why there is so much resistance to the idea that stating “X with probability P(X)” also implies “~X with probability 1-P(X)”. The point of assigning a probability to a prediction is that it represents your state of belief. Both statements uniquely specify the same state of belief, so treating them differently based on which one you happened to write down is irrational. Once you accept that these are the same statement, the conclusion in my post is inevitable: the mirror symmetry of the calibration curve becomes obvious, and given that symmetry, every calibration curve must pass through the point (0.5, 0.5).
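To spell out that last step, here is a minimal sketch of the symmetry argument; the notation f(p) is my own shorthand for the calibration curve, not something defined in the post.

```latex
% Let f(p) be the observed frequency of true claims among those assigned probability p.
% If every "X with probability p" is also counted as "~X with probability 1-p", then each
% true claim in the p-bracket contributes a false claim to the (1-p)-bracket and vice versa:
\[
  f(1-p) = 1 - f(p).
\]
% Setting p = 1/2 forces f(1/2) = 1 - f(1/2), i.e. f(1/2) = 1/2, so the curve must pass
% through the point (0.5, 0.5).
```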
Imagine the following conversation:
A: “I predict with 50% certainty that Trump will not win the nomination”.
B: “So, you think there’s a 50% chance that he will?”
A: “No, I didn’t say that. I said there’s a 50% chance that he won’t.”
B: “But you sort of did say it. You said the logically equivalent thing.”
A: “I said the logically equivalent thing, yes, but I said one and I left the other unsaid.”
B: “So if I believe there’s only a 10% chance Trump will win, is there any doubt that I believe there’s a 90% chance he won’t?”
A: “Of course, nobody would disagree: if you said there’s a 10% chance Trump will win, then you must also believe that there’s a 90% chance that he won’t. Unless you think there’s some probability that he both will and will not win, which is absurd.”
B: “So if my state of belief that there’s a 10% chance of A necessarily implies I also believe a 90% chance of ~A, then what is the difference between stating one or the other?”
A: “Well, everyone agrees that makes sense for 90% and 10% confidence. It’s only for 50% confidence that the rules are different and it matters which one you don’t say.”
B: “What about for 50.000001% and 49.999999%?”
A: “Of course, naturally, that’s just like 90% and 10%.”
B: “So what’s magic about 50%?”
I think it would be silly to resist the idea that “X with probability P(X)” is equivalent to “~X with probability 1-P(X)”. This statement is simply true.
However, this does not imply that prediction lists like Scott’s should include both X and ~X as separate claims. To see this, consider person A, who lists only “X, probability P”, and person B, who lists both “X, probability P” and “~X, probability 1-P”. Clearly these two are making exactly the same claim about the future of the world. If we use an entropy (log) rule to grade both of these people, we find that no matter the outcome person B incurs exactly twice the penalty that person A does, so if we afterwards want to compare the results of two people, only one of whom doubled up on the predictions, there is an easy way to do it (just double the penalty for those who didn’t). So far so good: everything is logically consistent, and making the same claim about the world still lets you compare results afterwards. Nevertheless, there are two (related) points that need to be made, which I think is where all the controversy lies:
1) If, instead of the correct log scoring rule, we use something stupid like least squares (or just eyeballing the hit rate per probability bracket), there is a significant difference between our people A and B above, precisely in their 50% predictions. For any probability assignment other than 50%, the error rates at probability P and at 1-P are linked and opposite, since getting a probability-P prediction right (say, X) means getting the corresponding probability-(1-P) prediction (~X) wrong. But at 50% both entries land in the same bracket, so the two get added together (with our stupid scoring rules) before being used to deduce calibration results. As a result our doubler, person B, will always have exactly half of his 50% predictions right, which scores really well on stupid scoring rules (as an extreme example, to a naive scoring rule somebody who predicts 50% on every claim, regardless of logical consistency, will appear perfectly calibrated); see the sketch after these two points.
2) Once we use a good scoring rule, i.e. the log rule, we can easily translate back and forth between people who double up on the claims and those who do not, as shown above.
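To make point 1 concrete, here is a minimal sketch with made-up outcomes (the 90% figure is arbitrary); it only shows that the doubled list is pinned to exactly half hits in the 50% bracket no matter what actually happened.

```python
import random

random.seed(0)

# Hypothetical claims, all assigned 50% confidence. Suppose the predictor is actually
# badly calibrated and the sides they happened to write down come true 90% of the time.
outcomes = [random.random() < 0.9 for _ in range(1000)]

singles = [(0.5, o) for o in outcomes]                 # person A: one entry per claim
doubles = singles + [(0.5, not o) for o in outcomes]   # person B: X and ~X both listed

def hit_rate(preds):
    """Naive per-bracket calibration check: fraction of listed entries that came true."""
    return sum(happened for _, happened in preds) / len(preds)

print(hit_rate(singles))  # roughly 0.9 here, exposing the miscalibration
print(hit_rate(doubles))  # exactly 0.5 by construction, so B "looks" perfectly calibrated
```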
In view of these two points I think that all of the magic is hidden in the scoring rule, not in the procedure used when recording the predictions. In other words, the doubling up does nothing useful. And since people reading calibration graphs tend to think that getting half of your 50% predictions right is really good, I’d say the doubled version is actually slightly more misleading. The solution is clearly to use a proper scoring rule, and then you can do whatever you wish. But in practice it is best not to confuse your audience by artificially creating more dependencies between your predictions.
X and ~X will always receive the same score under both the logarithmic and least-squares scoring rules that I described in my post, although I certainly agree that the logarithm is a better measure. If you dispute that point, please provide a numerical example.
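For concreteness, here is a small numerical check of that equality; the probability 0.8 and the outcome are arbitrary choices of mine, purely for illustration.

```python
import math

def log_score(p, happened):
    """Penalty: minus the log of the probability assigned to what actually occurred."""
    return -math.log(p if happened else 1 - p)

def squared_error(p, happened):
    """Least-squares penalty: squared distance between stated probability and outcome."""
    return (p - (1.0 if happened else 0.0)) ** 2

p = 0.8            # "X with probability 0.8"
x_happened = True  # suppose X came true; flipping this changes nothing below

# Score "X with probability p" against its mirror image "~X with probability 1-p":
print(log_score(p, x_happened), log_score(1 - p, not x_happened))          # identical
print(squared_error(p, x_happened), squared_error(1 - p, not x_happened))  # identical
```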
Because of the 1/N factor outside the sum, doubling the predictions does not affect your calibration score (as it shouldn’t!). This factor is necessary; without it your score would only ever get worse the more predictions you made, regardless of how good they were. Thus, including both X and ~X in the enumeration neither hurts nor helps your calibration score (whether you use the log or the least-squares rule).
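A minimal sketch of that averaging argument, on a made-up list of predictions: the total penalty doubles when each ~X entry is added, but the 1/N average does not move.

```python
import math

def penalties(preds):
    """One -log penalty per entry, for (probability assigned to X, did X happen) pairs."""
    return [-math.log(p if happened else 1 - p) for p, happened in preds]

# Made-up predictions, purely for illustration.
original = [(0.9, True), (0.7, False), (0.6, True), (0.5, True)]
doubled = original + [(1 - p, not happened) for p, happened in original]  # add each ~X entry

print(sum(penalties(original)), sum(penalties(doubled)))    # the total doubles...
print(sum(penalties(original)) / len(original),
      sum(penalties(doubled)) / len(doubled))                # ...but the 1/N average is unchanged
```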
I agree that eyeballing a calibration graph is no good either. That was precisely the point I made with the lottery ticket example in the main post, where the prediction score is lousy but the graph looks perfect.
I agree that there’s no magic in the scoring rule. Doubling predictions is unnecessary for practical purposes; the reason I detail it here is to make a very important point about how calibration works in principle. This point needed to be made in order to address the severe confusion in the Slate Star Codex comment threads, where there was widespread disagreement about what exactly happens at 50%.
I think we both agree that there should be no controversy about this—however, go ahead and read through the SSC thread to see how many absurd solutions were being proposed! That’s what this post is responding to! What is made clear by enumerating both X and ~X in the bookkeeping of predictions—a move to which there is no possible objection, because it is no different from the original prediction, nor does it affect a proper score in any way—is that there is no reason to treat 50% as though it has special properties that 50.01% lacks, and there’s certainly no reason to think there is any significance to the choice between writing “X, with probability P” and “~X, with probability 1-P”, even when P=50%.
If you still object to doubling the predictions, you can instead take Scott’s predictions and replace every X with ~X and every P with 1-P. Do you agree that this new set should be just as representative of Scott’s calibration as his original prediction set?
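For concreteness, a tiny sketch of that flip on hypothetical data; each flipped entry scores exactly what its original did, so any calibration measure built from the scores comes out the same.

```python
import math

def mean_log_score(preds):
    """Average -log penalty over (probability assigned to the stated claim, did it come true) pairs."""
    return sum(-math.log(p if happened else 1 - p) for p, happened in preds) / len(preds)

# Hypothetical stand-in for a prediction list.
original = [(0.5, False), (0.5, False), (0.5, False), (0.9, True), (0.7, True)]
flipped = [(1 - p, not happened) for p, happened in original]  # "~X with probability 1-P"

print(mean_log_score(original), mean_log_score(flipped))  # identical
```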