The trouble with this approach is that it breaks down when we want to describe uncertain events that are unique. The question of who will win the 2016 presidential election is one that we still want to be able to describe with probabilities, even though it doesn’t make great sense to aggregate probabilities across different presidential elections.
To explain what a single probability means (as opposed to what calibration means), you need to describe it as a measure of uncertainty. The three main ‘correctness’ questions then are 1) how well it corresponds to the actual future, 2) how well it corresponds to the clues known at the time, and 3) how precisely I’m reporting it.
That’s correct: my approach doesn’t generalize to unique/rare events. The ‘naive’ or frequentist approach seems to work for weather predictions, and creates a simple intuition that’s easier IMO to explain to laymen than more general approaches.
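The frequentist intuition above can be sketched as a quick calibration check: bucket forecasts by their stated probability and compare each bucket’s empirical frequency to that number. The `calibration_table` helper and the toy rain data below are hypothetical, just to illustrate the idea.

```python
from collections import defaultdict

def calibration_table(forecasts):
    """Group (stated_probability, outcome) pairs by stated probability
    and report the empirical frequency of the outcome in each bucket."""
    buckets = defaultdict(list)
    for p, outcome in forecasts:
        buckets[p].append(outcome)
    return {p: sum(o) / len(o) for p, o in sorted(buckets.items())}

# Hypothetical rain forecasts: (stated probability, did it rain? 1/0)
forecasts = [(0.7, 1), (0.7, 1), (0.7, 0), (0.7, 1),
             (0.3, 0), (0.3, 1), (0.3, 0), (0.3, 0)]
print(calibration_table(forecasts))  # → {0.3: 0.25, 0.7: 0.75}
```

A well-calibrated forecaster’s 0.7 bucket should see rain about 70% of the time. This is exactly why the approach needs many comparable events: a one-off event like a presidential election never fills a bucket.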
What do you mean?
What Vaniver said: my approach breaks down for unique events. Edited for clarity.