Is there an optimal function for belief calibration over time?
You can, say, measure a belief on a scale from −1 to 1, where 0 is the correct belief.
Then you could try calibrating the belief over time. The trouble is that there are several plausible calibration strategies: you could monotonically decrease the belief itself, you could monotonically decrease its absolute value, or you could even let the calibration function oscillate between −1 and 1. Sometimes a temporarily more incorrect belief might even be desirable, since it may give you additional information about the landscape. Writing D1(t) and D2(t) for the errors (distance from 0) of two calibration trajectories at time t, this is the case where D1(t) > D2(t) but D1(t+1) < D2(t+1): trajectory 1 is more wrong now, yet less wrong one step later.
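To make that crossover concrete, here is a minimal Python sketch. The specific schedules (linear decrease, exponential shrinkage of the absolute value, damped oscillation), the decay rates, and the choice of D(t) = |b(t)| as the error measure are all illustrative assumptions, not anything fixed by the question:

```python
# Illustrative sketch (assumed schedules, rates, and error measure):
# a belief b(t) lives in [-1, 1], 0 is the correct belief, and the
# error of a trajectory is D(t) = |b(t)|.
import math

def monotone_decrease(b0, t, rate=0.5):
    # Strategy 1: monotonically decrease the belief value itself
    # (only calibrates beliefs that start above 0, and overshoots).
    return b0 - rate * t

def shrink_abs(b0, t, decay=0.5):
    # Strategy 2: monotonically decrease the absolute value
    # (exponential decay toward 0, never changes sign).
    return b0 * math.exp(-decay * t)

def oscillate(b0, t, decay=0.3, freq=2.0):
    # Strategy 3: oscillate between -1 and 1 with damped amplitude.
    return b0 * math.exp(-decay * t) * math.cos(freq * t)

def D(b):
    # Error of a belief: distance from the correct belief 0.
    return abs(b)

b0 = 0.8
for t in range(6):
    d1 = D(oscillate(b0, t))   # trajectory 1: damped oscillation
    d2 = D(shrink_abs(b0, t))  # trajectory 2: monotone shrinkage of |b|
    note = ""
    if t > 0 and D(oscillate(b0, t - 1)) > D(shrink_abs(b0, t - 1)) and d1 < d2:
        note = "  <- D1(t-1) > D2(t-1) but D1(t) < D2(t)"
    print(f"t={t}  D1={d1:.3f}  D2={d2:.3f}{note}")
```

Run as-is, the oscillating trajectory overshoots past 0 and is briefly further from the correct belief than the monotone one at one step, then closer at the next, which is exactly the D1/D2 crossover described above.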