Is there an optimal function for belief calibration over time?
You can, say, measure a belief on a scale from −1 to 1, where 0 is a correct belief.
Then you could try calibrating the belief. The thing is, sometimes you can calibrate it by monotonically decreasing it. Or you could monotonically decrease its absolute value. Or you could even make the calibration function oscillate between −1 and 1. Sometimes more incorrect beliefs might even be desirable, since they may give you additional information about the landscape (this is where you can have a case where D1(t) > D2(t) and D1(t+1) < D2(t+1)).
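A toy sketch of that last point, assuming (my reading, not stated in the comment) that D1 and D2 are the absolute calibration errors of two hypothetical strategies at discrete time steps, with the numbers purely illustrative:

```python
# Hypothetical error trajectories: D2 decreases monotonically, while D1
# temporarily gets *worse* (exploring the landscape) but ends up lower,
# producing a step t where D1(t) > D2(t) and D1(t+1) < D2(t+1).
D1 = [0.8, 0.5, 0.7, 0.2, 0.1]   # non-monotone: worse at step 2, better afterwards
D2 = [0.8, 0.6, 0.4, 0.3, 0.25]  # monotone decrease

for t in range(len(D1) - 1):
    if D1[t] > D2[t] and D1[t + 1] < D2[t + 1]:
        print(f"at t={t}: D1(t)={D1[t]} > D2(t)={D2[t]}, "
              f"but D1(t+1)={D1[t+1]} < D2(t+1)={D2[t+1]}")
```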
Could you explain in more detail please? I have no idea what you’re talking about. For instance, how do you measure a belief on a −1 to 1 scale?
I expect that means underconfident / overconfident (something like the difference between the probability you gave and the probability estimate you would give if you had access to the same information but were perfectly calibrated).
m = p/50 − 1
where p is the percent chance or percent confidence.
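A minimal sketch of that mapping, assuming p is a percent confidence in [0, 100] (so 50% lands at 0, and the extremes land at −1 and 1):

```python
def confidence_to_scale(p):
    """Map a percent confidence p in [0, 100] onto the -1..1 scale via m = p/50 - 1."""
    if not 0 <= p <= 100:
        raise ValueError("p must be a percentage between 0 and 100")
    return p / 50 - 1

# 50% confidence sits at 0; the endpoints map to -1 and 1.
for p in (0, 25, 50, 75, 100):
    print(p, confidence_to_scale(p))  # -> -1.0, -0.5, 0.0, 0.5, 1.0
```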
You don’t need wrong beliefs to learn more.
Your argument for having incorrect beliefs doesn’t make any sense to me. So I guess my answer is “decreasing error over time is good, increasing is bad.”
I’m guessing that there might be an optimal function for individuals, at least for a while.
Assuming that what you mean is under- vs. over-confidence, some people will habitually be on one side of the scale, and others will be on the other.
Tracking whether one is habitually over- or underconfident (which might differ across various sorts of questions) could lead to better calibration.
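A rough sketch of that kind of tracking, assuming you log (predicted probability, outcome) pairs tagged by question type; the sign of the average gap is a crude over/underconfidence signal (all names and numbers here are illustrative):

```python
from collections import defaultdict

# Each record: (question_type, predicted_probability, outcome), outcome 1 if true, 0 if false.
predictions = [
    ("politics", 0.90, 0),
    ("politics", 0.80, 1),
    ("sports",   0.60, 1),
    ("sports",   0.55, 1),
]

gaps = defaultdict(list)
for topic, p, outcome in predictions:
    gaps[topic].append(p - outcome)  # positive gap = stated more confidence than warranted

for topic, g in gaps.items():
    bias = sum(g) / len(g)
    label = "overconfident" if bias > 0 else "underconfident"
    print(f"{topic}: average gap {bias:+.2f} ({label})")
```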