re: calibration, it seems like what we want to do is ask ourselves what happens if an agent is asked lots of different probability questions. Consider the true probability as a function of the probability stated by the agent, put some prior distribution (describing our uncertainty) on all such functions the agent could have, update this prior using a finite set of answers we have seen the agent give and their correctness, and end up with a posterior distribution over functions (agent’s probability → true probability). From that posterior we can estimate how over/underconfident the agent is at each probability level, and use those estimates to determine what the agent “really means” when it says 90%. If the agent is overconfident at all probabilities then it’s “overconfident” period; if it’s underconfident at all probabilities then it’s “underconfident” period; if it’s over at some and under at others then I guess it’s just “misconfident”? (An agent could be usually overconfident in an environment that usually asked it difficult questions and usually underconfident in an environment that usually asked it easy questions, or vice versa.) If we keep asking an agent that doesn’t learn the same question, like in anon’s comment, then that seems like a degenerate case. On first think it doesn’t seem like an agent’s calibration function necessarily depends on what questions you ask it: Solomonoff induction is well-calibrated in the long run in all environments (right?), and you could imagine an agent that was like SI but with all probability outputs moved twice as close to 1, say. Hope this makes sense.
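To make the idea concrete, here’s a minimal sketch (my own toy version, not anything from the thread) of the update step: instead of a full prior over arbitrary functions, it buckets the agent’s stated probabilities into bins and puts an independent Beta(1, 1) prior on the true hit rate in each bin, giving a crude posterior estimate of the “stated probability → true probability” curve. The function name, bin count, and prior are all assumptions for illustration.

```python
# Toy calibration-curve estimator: bucket (stated_prob, correct) pairs
# into bins and compute the posterior mean of the true hit rate per bin
# under an independent Beta(1, 1) prior. This is a simplification of
# "prior over all functions", assumed here just for illustration.

def calibration_curve(observations, n_bins=10):
    """observations: iterable of (stated_prob, correct) with correct in {0, 1}.
    Returns {bin_midpoint: posterior mean of true probability} for
    every non-empty bin."""
    hits = [0] * n_bins
    totals = [0] * n_bins
    for p, correct in observations:
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        hits[i] += correct
        totals[i] += 1
    curve = {}
    for i in range(n_bins):
        if totals[i]:
            midpoint = (i + 0.5) / n_bins
            # Beta(1, 1) prior -> posterior mean is (hits + 1) / (total + 2)
            curve[midpoint] = (hits[i] + 1) / (totals[i] + 2)
    return curve

# An agent that says 90% but is right only 7 times out of 10 looks
# overconfident at that level: the posterior mean lands well below 0.9.
obs = [(0.9, 1)] * 7 + [(0.9, 0)] * 3
print(calibration_curve(obs))
```

If the whole estimated curve sits below the diagonal the agent is “overconfident” period, above it “underconfident” period, and crossing it would be the “misconfident” case from above.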