I’m writing a book about epistemology. It’s about The Problem of the Criterion, why it’s important, and what it has to tell us about how we approach knowing the truth.
I’ve also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently-dormant AI safety org, PAISRI.
Back when I tried playing some calibration games, I found I was not able to get successfully calibrated above 95%. Above that point I start making errors from things like misinterpreting the question or randomly hitting the wrong button.
The math is not quite right on this, but based on it I’ve adopted a personal 5% error margin policy: in practice that seems to be about the limit of my ability to make accurate predictions, and it’s served me well.
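To see why mechanical mistakes put a ceiling on calibration like this, here’s a toy simulation (a sketch only, not the actual games I played; the 5% slip rate and the flip-on-slip model are assumptions for illustration). Even if my underlying credences were perfectly calibrated, a slip rate around 5% caps observed accuracy near 95% no matter how confident I am.

```python
import random

random.seed(0)

SLIP_RATE = 0.05      # assumed rate of misreads / wrong-button presses (hypothetical)
N_QUESTIONS = 100_000

# Toy model: the underlying credence is, by assumption, perfectly calibrated --
# the answer really would be right exactly that often. But with probability
# SLIP_RATE I misread the question or hit the wrong button, flipping the
# recorded answer. Observed accuracy then tops out near 1 - SLIP_RATE.
for stated in (0.70, 0.80, 0.90, 0.95, 0.99, 1.00):
    correct = 0
    for _ in range(N_QUESTIONS):
        right_before_slip = random.random() < stated
        slipped = random.random() < SLIP_RATE
        correct += right_before_slip != slipped   # a slip flips the outcome
    print(f"stated {stated:.0%} -> observed {correct / N_QUESTIONS:.1%}")
```

In this model the gap between stated and observed confidence grows as stated confidence rises, which is roughly the pattern that motivated the 5% margin.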