Awesome post! I’m very ignorant of the precision-estimation literature so I’m going to be asking dumb questions here.
First of all, I feel like a precision function should take some kind of “acceptable loss” parameter. From what I gather, to specify the precision you need some threshold in your algorithm(s) for how much accuracy loss you’re willing to tolerate.
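To make that concrete, here's a minimal sketch of the kind of signature I have in mind. Everything in it (the name `precision_fn`, the Brier-score loss, the digit-counting convention) is my own invention, not something from the post:

```python
# Hypothetical sketch, not the post's definition: a precision function that
# takes an explicit acceptable-loss threshold. "Precision" here is read as
# "how many decimal digits of the report are worth keeping".
def precision_fn(prediction: float, outcome: int, acceptable_loss: float) -> int:
    """Return the fewest decimal digits d such that rounding `prediction`
    to d digits worsens its Brier score against `outcome` by at most
    `acceptable_loss`."""
    exact_loss = (prediction - outcome) ** 2  # Brier score of the full report
    for digits in range(11):                  # cap the search at 10 digits
        rounded = round(prediction, digits)
        if (rounded - outcome) ** 2 - exact_loss <= acceptable_loss:
            return digits
    return 10
```

With a loose threshold this collapses reports to one or two digits; only as `acceptable_loss` shrinks does the full decimal expansion become "justified", which is why the threshold seems like it has to be an input.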
More fundamentally, though, I’m trying to understand what exactly we want to measure. The list of desired properties of a precision function feels somewhat pulled out of thin air, and I’d feel more comfortable with a philosophical understanding of where those properties come from. So let’s say we have a set S of possible states/trajectories of the world, the world provides us with some evidence E, and we’re interested in P(A|E) for some event A⊆S. Maybe reality has some fixed P(A|E) out there, but we’re not privy to it, so we’re forced to use some “hyperprior” (am I using that word right?) over probability measures on S. After conditioning on E, we get a probability distribution over possible values of P(A|E), and participants in a prediction market will report its expected value as their answer. The precision is trying to quantify something like the standard deviation of this distribution over values of P(A|E), right?
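To check whether I've got that picture right, here's a toy numerical version of it. The Beta hyperprior, the Bernoulli evidence model, and the numbers are all my assumptions, not anything from the post:

```python
# Toy model (my assumptions, not the post's): put a Beta(1, 1) hyperprior
# on the unknown "true" P(A|E), and let the evidence E be k occurrences
# of A in n observations. The posterior over P(A|E) is then Beta(1+k, 1+n-k).
k, n = 3, 10
a, b = 1 + k, 1 + n - k

mean = a / (a + b)                          # the market participant's reported answer
var = a * b / ((a + b) ** 2 * (a + b + 1))  # spread of the belief about P(A|E)
std = var ** 0.5                            # candidate "precision" measure

print(f"reported probability: {mean:.4f}")  # ~0.3333
print(f"posterior std dev:    {std:.4f}")   # ~0.1307
```

On this reading, a "precise" forecast is one where the posterior over P(A|E) is tightly concentrated, not merely one reported with many digits.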
P.S. This is entirely a skill issue on my part, but I’m not sure what symbols you’re using for the precision function and the perturbation function. Detexify was of no use. Feel free to enlighten me!
If my interpretation of the precision function is correct, then I guess my main concern is this: how are we reaching inside the minds of the predictors to see what their distribution on P(A|E) is?

Like, imagine we have an urn with black and red marbles in it and we run a prediction market on the probability that a uniformly randomly chosen marble will be red. Let’s say two people participated in this prediction market: Alice and Bob. Alice estimated a 17⁄52 ≈ 0.3269230769 chance of the marble being red because she saw the marbles being put in: 17 red marbles out of 52 total. Bob estimated a 0.3269230769 chance of the marble being red because he felt like it. Bob is clearly providing false precision while Alice’s precision is entirely justified. However, no matter which way the urn draw goes, the input tuple, (0.3269230769, 0) or (0.3269230769, 1), is identical for both participants, so any precision function must return the same value for both. This feels to me like a fundamental disconnect between what we want to measure and what we are measuring (sketch below). Am I mistaken in my understanding? Thanks!
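Concretely, the shape of the worry in code, where `toy_precision` is a stand-in I made up; the point holds for any deterministic function of the (prediction, outcome) tuple:

```python
# Stand-in precision function (my invention): the specific body doesn't
# matter, only that it is a deterministic function of (prediction, outcome).
def toy_precision(prediction: float, outcome: int) -> float:
    return abs(prediction - outcome)

alice = (0.3269230769, 1)  # justified: she counted 17 red marbles out of 52
bob   = (0.3269230769, 1)  # false precision: he just felt like it

# Identical inputs force identical outputs, no matter how the estimates
# were produced, which is exactly the disconnect described above.
assert toy_precision(*alice) == toy_precision(*bob)
```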