Note that your observation does not generalize to more complex log-odds distributions. Here is a simple counterexample:
Let’s say that L(B|E) = 1 + x with chance 2/3, and L(B|E) = 1 - 2x with chance 1/3. It still holds that EL(B|E) = 1. But the expected probability EP(B|E) is no longer a monotone function of x: for x ≥ 0 it has a global minimum at x = 2.
x    EP(B|E)
0    0.666667
1    0.644444
2    0.629630
3    0.637552
4    0.649049
5    0.657060
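For concreteness, here is a minimal script that reproduces the table, assuming the log-odds are taken in base 2 (the only reading that matches the numbers above); the mixture weights and branches are exactly the ones in the counterexample:

```python
def prob_from_logodds(logodds, base=2.0):
    """Map log-odds (base 2 here, which matches the table) to a probability."""
    odds = base ** logodds
    return odds / (1.0 + odds)

def expected_prob(x):
    """EP(B|E) for the mixture: L(B|E) = 1+x with chance 2/3, 1-2x with chance 1/3."""
    return (2/3) * prob_from_logodds(1 + x) + (1/3) * prob_from_logodds(1 - 2 * x)

for x in range(6):
    print(f"{x}    {expected_prob(x):.6f}")
```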
Indeed. It looks like the effect I described occurs when the meta-uncertainty is over a small range of log-odds values relative to the posterior log-odds, and there is another effect that can produce arbitrary expected probabilities given the right distribution over an arbitrarily large range of values. For any probability p, let L(B|E) = average + (1-p)*x with probability p and L(B|E) = average - p*x with probability (1-p); then the limit of the expected probability as x approaches infinity is p.
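A quick numerical check of that limit, as a sketch only; the choices of average = 1 and base-2 log-odds below are arbitrary and do not affect the limit:

```python
def prob_from_logodds(logodds, base=2.0):
    """Map log-odds to a probability; the base only rescales x and doesn't change the limit."""
    odds = base ** logodds
    return odds / (1.0 + odds)

def expected_prob(p, x, average=1.0):
    """EP(B|E) for L = average + (1-p)*x with probability p, and average - p*x with probability 1-p."""
    return (p * prob_from_logodds(average + (1 - p) * x)
            + (1 - p) * prob_from_logodds(average - p * x))

for x in (1, 10, 100, 1000):
    print(f"{x:>5}  {expected_prob(0.3, x):.6f}")  # tends to p = 0.3 as x grows
```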
I notice that the minimum at x = 2 is where |1 + x| = |1 − 2x|. That might be interesting to look into.
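One way to see the connection, as a quick sketch: write σ for the logistic map from log-odds to probability (in whatever base the log-odds are taken); σ′(ℓ) is symmetric about ℓ = 0, since σ(−ℓ) = 1 − σ(ℓ), and strictly decreasing in |ℓ|. Then

$$\frac{d}{dx}\,EP(B|E) = \tfrac{2}{3}\,\sigma'(1+x) - \tfrac{2}{3}\,\sigma'(1-2x) = 0 \iff \sigma'(1+x) = \sigma'(1-2x) \iff |1+x| = |1-2x|,$$

whose nonnegative solutions are x = 0 and x = 2, so the minimum the table picks out is exactly where the two branches sit at equal distances from log-odds zero.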
(Possibly more rigorous and explicit math to follow when I can focus on it more.)
I let L(B|E) be uniform from x - s/2 to x + s/2 and got that

P(B|E) = (1/s) * ln((1 + A*e^(s/2)) / (1 + A*e^(-s/2))),

where A is the odds if L(B|E) = x. In the limit as s goes to infinity, the interesting pieces are a term of ln(A)/s, which drops off as s grows, plus a term that eventually looks like (1/s)*ln(e^(s/2)) = 1/2, which means we approach 1/2.
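As a sanity check, here is a sketch comparing that closed form against a direct numerical average, assuming natural-log odds (consistent with the e^(s/2) above) and picking the center x = 2 arbitrarily; both head toward 1/2 as s grows:

```python
import math

def prob_from_logodds(logodds):
    """Natural-log odds to probability."""
    return 1.0 / (1.0 + math.exp(-logodds))

def expected_prob_uniform(x, s, n=100_000):
    """Average probability when L(B|E) is uniform on [x - s/2, x + s/2], by the midpoint rule."""
    return sum(prob_from_logodds(x - s / 2 + (i + 0.5) * s / n) for i in range(n)) / n

def closed_form(x, s):
    """(1/s) * ln((1 + A*e^(s/2)) / (1 + A*e^(-s/2))), with A = e^x the odds at the center."""
    A = math.exp(x)
    return (math.log1p(A * math.exp(s / 2)) - math.log1p(A * math.exp(-s / 2))) / s

for s in (1.0, 10.0, 100.0, 1000.0):
    print(f"{s:>7}  closed={closed_form(2.0, s):.6f}  numeric={expected_prob_uniform(2.0, s):.6f}")
```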