Thanks Emile,
Is there anything you’d like to see added?
For example, I was thinking of running it on Node.js and logging players' scores, so you could see how you compare. (I don't currently have a way to host it, though.)
Another possibility is to add diagnostics: e.g. whether you were systematically setting your guesses too high, or whether they fluctuated more than the data would warrant (under some model for the prior/posterior, say).
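A minimal sketch of what such a diagnostic could look like, assuming the game records each guess alongside the true value (the function name and data shape here are hypothetical, not from the actual app):

```javascript
// Hypothetical diagnostic: given a player's guesses and the true values,
// report systematic bias (mean signed error) and error spread (std. dev.).
function calibrationDiagnostics(guesses, truths) {
  const errors = guesses.map((g, i) => g - truths[i]);
  const bias = errors.reduce((a, b) => a + b, 0) / errors.length;
  const variance =
    errors.reduce((a, e) => a + (e - bias) ** 2, 0) / errors.length;
  return { bias, spread: Math.sqrt(variance) };
}

// Guesses consistently 2 above the truth: bias of 2, no spread.
const d = calibrationDiagnostics([12, 7, 5], [10, 5, 3]);
console.log(d); // { bias: 2, spread: 0 }
```

A positive bias would suggest systematically guessing too high; a spread much larger than the posterior's standard deviation would suggest over-fluctuating relative to the data.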
Also, I’d be happy to have pointers to your calibration apps or others you’ve found useful.