But that doesn’t output 1 for estimates of 100%, 0 for estimates of 50%, and -inf (or even −1) for estimates of 0%, or even something that can be normalized to either of those triples.
Huh. I thought that wasn’t a Bayesian score (not maximized by estimating correctly), but doing the math, the maximum is at the right point for 1⁄4, 1⁄100, 3⁄4, 99⁄100, and 1⁄2.
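If anyone wants to check the “maximized by estimating correctly” part numerically, here’s a rough Python sketch (my own, not from the original comment; expected_score and the grid search are just for illustration):

```python
# For a true probability p, the expected log score
#   p*log2(q) + (1-p)*log2(1-q)
# as a function of the reported probability q should peak at q = p.
import math

def expected_score(p, q):
    """Expected log2 score when the true probability is p and we report q."""
    return p * math.log2(q) + (1 - p) * math.log2(1 - q)

for p in (1/4, 1/100, 3/4, 99/100, 1/2):
    # Scan reported probabilities on a fine grid and find the best one.
    grid = [i / 10000 for i in range(1, 10000)]
    best_q = max(grid, key=lambda q: expected_score(p, q))
    print(f"true p = {p:.2f}  ->  score maximized near q = {best_q:.4f}")
```

Each line comes out with the best q essentially equal to p, which is what “proper scoring rule” means here.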
Here’s the “normalized” version: f(x)=1+log2(x), g(x)=1+log2(1-x) (i.e. take the original f and g, scale by 1/log(2), and add 1).
Now f(1)=1, f(.5)=0, f(0)=-Inf; g(1)=-Inf, g(.5)=0, g(0)=1.
Ok?
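For concreteness, a minimal Python sketch of that f and g (the names match the comment above; the guards against taking the log of 0 are my own addition):

```python
# The "normalized" scoring pair: math.log2 handles the scaling by 1/log(2),
# and the +1 is the shift.
import math

def f(x):
    """Score for an event that happened, given the estimate x."""
    return 1 + math.log2(x) if x > 0 else float("-inf")

def g(x):
    """Score for an event that didn't happen, given the estimate x."""
    return 1 + math.log2(1 - x) if x < 1 else float("-inf")

for x in (1.0, 0.5, 0.0):
    print(f"f({x}) = {f(x)},  g({x}) = {g(x)}")
# Prints f(1)=1, f(.5)=0, f(0)=-inf and g(1)=-inf, g(.5)=0, g(0)=1,
# matching the triples asked for above.
```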