The appropriate thing to do is to apply (an estimate of) Bayes' rule. You don't need to try to specify every possible outcome in advance; that is hopeless and a waste of effort. Rather, you use the information you got about what actually happened to form an improved estimate of what would have happened, and assign credit accordingly.
First, let’s look at what we’re trying to do. If you’re trying to make good predictions, you want
p(X | “X”)
to be as close to 1 as possible, where X is what happens, and “X” is what you say will happen.
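To make that concrete, here is a minimal sketch of what estimating p(X | "X") might look like operationally, assuming a hypothetical record of (what you said, what happened) pairs; the log and its entries are made up for illustration:

```python
# Hypothetical log of predictions: each entry is (what you said would
# happen, what actually happened). All values here are illustrative.
log = [
    ("win", "win"),
    ("win", "lose"),
    ("lose", "lose"),
    ("win", "win"),
]

# Empirical estimate of p(X | "X"): of the times you said X would
# happen, how often did X actually occur?
hits = sum(1 for said, happened in log if said == happened)
accuracy = hits / len(log)
print(f"estimated p(X | 'X') = {accuracy:.2f}")  # 0.75 for this toy log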
If an unbiased observer would initially have predicted, say, p(you win at fencing) = 0.5, then the initial estimate of your accuracy for that statement would be 0.5. After you win 14 touches in a row, it would probably be somewhere around 0.999, which is nearly as good as the prediction having come true (unless your accuracy is already in the 99.9%+ range, at which point this doesn't help refine the estimate of your accuracy).
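Here is a toy version of that update, under assumptions I am adding for illustration: the bout is first to 15 touches, and each touch is an independent coin flip with some per-touch win probability q. Real touches aren't independent, which is one reason an honest accuracy estimate stays more conservative (closer to the ~0.999 above) than this model's output.

```python
from functools import lru_cache

TARGET = 15  # assumed bout length: first to 15 touches wins

@lru_cache(maxsize=None)
def p_win(mine: int, theirs: int, q: float = 0.5) -> float:
    """Probability of winning the bout from the score (mine, theirs),
    treating each touch as an independent Bernoulli(q) trial."""
    if mine >= TARGET:
        return 1.0
    if theirs >= TARGET:
        return 0.0
    return q * p_win(mine + 1, theirs, q) + (1 - q) * p_win(mine, theirs + 1, q)

print(p_win(0, 0))   # 0.5      -- matches the unbiased observer's prior
print(p_win(14, 0))  # ~0.99997 -- after winning 14 touches in a row
```

The point is not the exact number (which depends on the model), but that the observed evidence pushes the estimated probability of the predicted outcome very close to 1.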
So you don't need to ask more precise questions. You do need to honestly evaluate, in aborted trials, whether there were dramatic shifts in the apparent probability of the outcome. When doing this in real life, actually working through the Bayesian mathematics is probably not worth the effort, but keeping the gist of it certainly is.
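As one possible way to picture that gist: score completed trials as 0 or 1 as usual, and score aborted trials with the updated probability of the predicted outcome instead of discarding them. This sketch is my own framing, not a procedure from the text, and all the field names are invented:

```python
def credit(trial: dict) -> float:
    """Credit for the statement "X will happen" in one trial.

    Completed trials pay out 0 or 1; aborted trials pay out the
    estimated probability that X would have happened, given what was
    observed before the trial was cut short.
    """
    if trial["completed"]:
        return 1.0 if trial["outcome_matched_prediction"] else 0.0
    return trial["p_outcome_given_partial_info"]

trials = [
    {"completed": True, "outcome_matched_prediction": True},
    # A bout abandoned while you led 14-0: near-full credit, per above.
    {"completed": False, "p_outcome_given_partial_info": 0.99997},
]
print(sum(credit(t) for t in trials) / len(trials))  # ~0.999985
```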