A big part of the answer for me is something like this Scott Alexander post about the probability of X within your model versus the probability that your model is miscalibrated in a relevant way. Given how shaky our models of the world are, this alone makes it hard for me to push past 99% on many questions, especially those that require predicting human decisions.
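To spell out why model uncertainty caps confidence this way, here is the standard total-probability decomposition (the specific numbers are purely illustrative assumptions, not estimates for any real question):

$$P(X) = P(X \mid M)\,P(M) + P(X \mid \neg M)\,\bigl(1 - P(M)\bigr)$$

where $M$ is "my model is relevantly correct." Even if the within-model estimate is $P(X \mid M) = 0.999$, assigning $P(M) = 0.95$ and a genuinely uncertain $P(X \mid \neg M) = 0.5$ gives $P(X) \approx 0.999 \times 0.95 + 0.5 \times 0.05 \approx 0.974$, which stays below 99% no matter how confident the model itself is.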