Taken literally, the concept of “confidence in a probability” is incoherent.
Why? I thought the way Lumifer expressed it in terms of Bayesian hierarchical models was pretty coherent. It might be turtles all the way down, as he says, and it might be hard to use in a rigorous mathematical way, but at least it’s coherent. (And useful, in my experience.)
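To make that concrete, here is a minimal Python sketch of the kind of thing I have in mind (my own illustrative framing, not Lumifer’s exact construction): represent “confidence in a probability” as a distribution over the probability itself, say a Beta whose mean plays the role of the point estimate and whose spread plays the role of the confidence in it.

```python
# A minimal sketch of "confidence in a probability" as a hierarchical model:
# instead of a single number p, keep a distribution over p -- here a Beta(a, b),
# whose mean acts as the point estimate and whose spread acts as the confidence.

import math

def beta_mean(a, b):
    return a / (a + b)

def beta_sd(a, b):
    # Standard deviation of a Beta(a, b) distribution.
    return math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

# Two beliefs with the same point estimate (0.6) but different spread
# (the parameter values are illustrative assumptions):
beliefs = {"confident": (60, 40), "uncertain": (3, 2)}

for name, (a, b) in beliefs.items():
    print(f"{name}: mean={beta_mean(a, b):.3f}, sd={beta_sd(a, b):.3f}")

# "Turtles all the way down": a and b could themselves be given priors,
# and those priors could have priors; in practice the chain is cut off early.
```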
Another concept is how much you think your probability estimate will change as you encounter new evidence.
This is pretty much what I meant in my original post by writing:
I usually think of the height of the curve at any given point as representing how likely I think it is that I’ll discover evidence that will change my belief. So for a low bell curve centered on .6, I think of that as meaning that I’d currently assign the belief a probability of around .6 but I also consider it likely that I’ll discover evidence (if I look for it) that can change my opinion significantly in any direction.
But expressing it in terms of how likely my beliefs are to change given more evidence is probably better. Or, to put it yet another way: how strong new evidence would need to be for me to change my estimate.
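One rough way to quantify that (the Beta-Bernoulli setup, the 0.75 threshold, and the parameter values below are just illustrative assumptions on my part): ask how many consistent observations it would take to move each belief the same distance.

```python
# A rough sketch of "how strong new evidence would need to be": starting from
# the same two Beta beliefs as above, count how many consistent observations
# (via the standard Beta-Bernoulli update) it takes to push the mean from 0.6
# past an arbitrary threshold of 0.75.

def observations_needed(a, b, target):
    # Each consistent observation adds 1 to a; stop once the mean passes target.
    n = 0
    while (a + n) / (a + b + n) < target:
        n += 1
    return n

print("confident Beta(60, 40):", observations_needed(60, 40, 0.75), "observations")
print("uncertain Beta(3, 2):  ", observations_needed(3, 2, 0.75), "observations")
# The spread-out belief moves after a handful of observations; the
# concentrated one needs far more evidence to shift the same distance.
```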
It seems like the scheme I’ve been proposing here is not a common one. So how do people usually express the obvious difference between a probability estimate of 50% for a coin flip (unlikely to change with more evidence) and a probability estimate of 50% for AI being developed by 2050 (very likely to change with more evidence)?
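Here is a sketch of how I would model that contrast (with made-up parameters purely for illustration): give both beliefs the same point estimate of 0.5, make the coin belief very concentrated and the AI belief nearly flat, and compare how the same evidence moves them.

```python
# A sketch of the coin-flip vs. AI-by-2050 contrast: both beliefs start at a
# point estimate of 0.5, but the coin is modelled as a very concentrated Beta
# (as if ~2000 flips had already been seen) and the AI forecast as a nearly
# flat one. The parameter choices are assumptions for illustration only.

def posterior_mean(a, b, successes, failures):
    # Mean of the Beta posterior after observing the given counts.
    return (a + successes) / (a + b + successes + failures)

coin = (1000, 1000)   # 50%, highly resilient
ai_2050 = (1, 1)      # 50%, not resilient at all

evidence = (3, 0)     # three new pieces of evidence pointing the same way

for name, (a, b) in [("coin flip", coin), ("AI by 2050", ai_2050)]:
    print(f"{name}: 0.5 -> {posterior_mean(a, b, *evidence):.3f}")
# The coin estimate barely moves (about 0.501); the AI estimate jumps to 0.8.
```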