I think you’re conflating impressiveness of predictions with calibration of predictions.
Could you give an example where the claim is that 50% predictions are less meaningful than 10% predictions?
I mean, these things? A very similar claim to “10% predictions are less meaningful than 50% predictions”, which came from conflating impressiveness with calibration.
It may be that we’re just talking past each other?
Yes, exactly: this post conflates accuracy and calibration. Thus it is a poor antidote to people who make that mistake.
I do think we’re talking past each other now as I don’t know how this relates to our previous discussion.
At any rate I don’t think the discussion is that high value to the rest of the post so I think I’ll just leave it here.
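To make the distinction concrete, here is a minimal sketch (a hypothetical simulation, not from the discussion above) of why 50% predictions are no less meaningful for calibration purposes: a forecaster who says 50% on fair-coin events is perfectly calibrated even though the predictions are unimpressive, and a forecaster who says 10% on events with a 10% base rate is calibrated in exactly the same sense.

```python
import random

random.seed(0)

def simulate(prob, n=10_000):
    """Simulate n binary events that each occur with probability prob.

    Returns the stated probability and the list of outcomes.
    """
    return prob, [random.random() < prob for _ in range(n)]

def observed_rate(outcomes):
    """Fraction of events that actually occurred."""
    return sum(outcomes) / len(outcomes)

# Forecaster A assigns 50% to fair-coin events; forecaster B assigns
# 10% to events with a 10% base rate.
p_a, out_a = simulate(0.5)
p_b, out_b = simulate(0.1)

# Calibration check: among events assigned probability p, roughly a
# fraction p should occur. Both forecasters pass, even though neither
# prediction is "impressive" (neither beats its base rate).
print(f"A said {p_a:.0%}; events occurred {observed_rate(out_a):.1%} of the time")
print(f"B said {p_b:.0%}; events occurred {observed_rate(out_b):.1%} of the time")
```

The point of the sketch is that calibration is a property of how well stated probabilities match observed frequencies, independent of how far those probabilities sit from the base rate.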