-”If all that The Rock is cooking is setting the probability of every possible change to epsilon, then when the first of those events happens his Brier score is suddenly going to explode and he is going to lose all his Bayes points.”
I think you are thinking of logarithmic scoring. With the Brier score, a wrong 100% prediction is scored the same as just four (wrong or right) 50% predictions, hardly an “explo[sion]”.
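To spell out the arithmetic, here’s a quick sketch (assuming the usual per-prediction penalties: (f − o)² for the Brier score, and −ln of the probability assigned to the actual outcome for log scoring):

```python
import math

def brier(forecast, outcome):
    """Brier penalty for one binary prediction: (forecast - outcome)^2."""
    return (forecast - outcome) ** 2

def log_score(forecast, outcome):
    """Log penalty: -ln(probability assigned to what actually happened)."""
    p = forecast if outcome == 1 else 1 - forecast
    return -math.log(p)

# A wrong 100% prediction costs the same (Brier) as four 50% predictions:
print(brier(1.0, 0))           # 1.0
print(4 * brier(0.5, 1))       # 1.0  (a 50% prediction costs 0.25, right or wrong)

# Under log scoring, the same near-certain wrong prediction really does explode:
print(log_score(0.999999, 0))  # ~13.8, and -> infinity as the forecast -> 1
```

So the Brier penalty for maximal overconfidence is capped at 1, while the log penalty is unbounded.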
Wait, why would anyone use that? So confused.
Probably just because its definition is simpler than that of logarithmic scoring. Do you think it is obvious that logarithmic scoring is better? That doesn’t seem obvious to me.
From Wikipedia:
The Brier score becomes inadequate for very rare (or very frequent) events, because it does not sufficiently discriminate between small changes in forecast that are significant for rare events.
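To make “does not sufficiently discriminate” concrete, here’s a sketch comparing two forecasters on an event with a 0.1% base rate (hypothetical numbers, same penalty definitions as above):

```python
import math

base_rate = 0.001  # the event happens 0.1% of the time

# One forecaster is calibrated at 0.1%; the other is off by a factor of 20.
for name, p in [("calibrated 0.1%", 0.001), ("miscalibrated 2%", 0.02)]:
    # Expected per-prediction penalty, averaged over the base rate.
    exp_brier = base_rate * (p - 1) ** 2 + (1 - base_rate) * p ** 2
    exp_log = -(base_rate * math.log(p) + (1 - base_rate) * math.log(1 - p))
    print(f"{name}: Brier {exp_brier:.6f}, log {exp_log:.6f}")

# Brier: ~0.00100 vs ~0.00136 -- a gap buried in the fourth decimal place.
# Log:   ~0.0079  vs ~0.0241  -- roughly a 3x separation for the same error.
```

A factor-of-20 miscalibration on a rare event barely moves the Brier score, while the log score separates the two forecasters clearly.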
I guess it’s more probability-centric than odds-centric: a shift from 1% to 2% barely registers on the squared-error (probability) scale, even though it roughly doubles the odds, which is exactly the kind of change the log score picks up.
OK, so basically it’s well-known that the Brier score only works for predictions of events that aren’t super rare, so it wouldn’t be used to score things that only happen 0.1% of the time on average (which is the only way anyone could be 99.9% accurate).
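And closing the loop on the original quote, a sketch (hypothetical epsilon and base rate) of why the all-epsilon strategy never makes the Brier score explode:

```python
import random

random.seed(0)
eps, base_rate, n = 0.001, 0.001, 1_000_000

# "The Rock" strategy: assign probability eps to every possible change.
outcomes = [1 if random.random() < base_rate else 0 for _ in range(n)]
penalties = [(eps - o) ** 2 for o in outcomes]

print(f"events that happened: {sum(outcomes)} of {n}")  # ~1000, i.e. ~0.1%
print(f"mean Brier score: {sum(penalties) / n:.6f}")    # ~0.001, near the floor

# Each event that happens adds (1 - eps)^2 ~= 1 to the total, a bounded bump
# spread over n predictions -- not an explosion. A log score would instead add
# -ln(eps) ~= 6.9 per hit, and would blow up as eps -> 0.
```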