How about this: a Bayesian will always predict that she is perfectly calibrated, even though she knows the theorems proving she isn’t.
Wanna bet? Literally. Have a Bayesian make a whole bunch of predictions and then offer her bets with payoffs based on what apparent calibration the results will reflect. See which bets she accepts and which she refuses.
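The proposed experiment can be sketched as a simulation. This is a hypothetical setup, not anything specified in the thread: true probabilities are drawn uniformly, an `overconfidence` knob exaggerates the forecaster's reports, and apparent calibration is scored as the average gap between stated probability and observed frequency within each bin.

```python
import random

def apparent_calibration_error(n=10_000, overconfidence=1.0, seed=0):
    """Simulate a forecaster, then score how calibrated she *appears*.

    Stated probabilities are the true probabilities pushed away from 0.5
    by `overconfidence` (1.0 means she reports the true probabilities).
    Returns the mean absolute gap between each stated-probability bin
    and the observed frequency of events in that bin.
    """
    rng = random.Random(seed)
    bins = {}  # stated-probability bin -> (hits, total)
    for _ in range(n):
        true_p = rng.random()
        stated = min(1.0, max(0.0, 0.5 + (true_p - 0.5) * overconfidence))
        outcome = rng.random() < true_p
        b = round(stated, 1)  # bins 0.0, 0.1, ..., 1.0
        hits, total = bins.get(b, (0, 0))
        bins[b] = (hits + outcome, total + 1)
    gaps = [abs(b - hits / total) for b, (hits, total) in bins.items()]
    return sum(gaps) / len(gaps)

calibrated_err = apparent_calibration_error(overconfidence=1.0)
overconfident_err = apparent_calibration_error(overconfidence=1.5)
print(calibrated_err, overconfident_err)
```

A bookie could then price bets on whether the measured error falls below some threshold and see which ones the forecaster takes.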
I was about to suggest we could just bet raw ego points by publicly posting here… but then I realised I prove my point just by playing.
It should be obvious, by the way, that if the predictions you have me make pertain to black boxes that you construct then I would only bet if the odds gave a money pump. There are few cases in which I would expect my calibration to be superior to what you could predict with complete knowledge of the distribution.
I am convinced in full generality that being offered the option of a bet can only provide utility >= 0. So if the punch line is ‘insufficiently constrained rationality’ then yes, the joke is on me!
And yes, I suspect trying to get my head around that paper would (will) be rather costly! I’m a goddam programmer. :P
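The “a bet offered can only provide utility >= 0” claim is just option value: an agent free to decline never does worse than the status quo. A toy sketch of that reasoning (assuming the expected payoff is known and utility is linear in it, which is the simplest case the claim covers):

```python
def value_of_bet_option(expected_payoff):
    """The option to take a bet is worth the better of taking it or
    declining (payoff 0), so it can never be worth less than zero."""
    return max(0.0, expected_payoff)

bad_bet = value_of_bet_option(-10.0)   # decline and lose nothing
good_bet = value_of_bet_option(3.5)    # accept
print(bad_bet, good_bet)
```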
> Wanna bet? Literally. Have a Bayesian make a whole bunch of predictions and then offer her bets with payoffs based on what apparent calibration the results will reflect. See which bets she accepts and which she refuses.
Are you volunteering?
Sure. :)
But let me warn you… I actually predict my calibration to be pretty darn awful.
We need a trusted third party.
Find a candidate.
> I was about to suggest we could just bet raw ego points by publicly posting here… but then I realised I prove my point just by playing.
>
> It should be obvious, by the way, that if the predictions you have me make pertain to black boxes that you construct then I would only bet if the odds gave a money pump. There are few cases in which I would expect my calibration to be superior to what you could predict with complete knowledge of the distribution.
Phooey. There goes plan A.
;)
Plan B involves trying to use some nasty posterior inconsistency results, so don’t think you’re out of the woods yet.
> I am convinced in full generality that being offered the option of a bet can only provide utility >= 0. So if the punch line is ‘insufficiently constrained rationality’ then yes, the joke is on me!
>
> And yes, I suspect trying to get my head around that paper would (will) be rather costly! I’m a goddam programmer. :P
I volunteer, if y’all tell me what to do.
I volunteer.