This makes an important point that I find myself consistently referring to—almost none of the confidence in predictions, even inside the rationalist community, is based on actual calibration data. Experts forecast poorly, and we need to stop treating expertise or argumentation as strong stand-alone reasons to accept claims which are implicitly disputed by forecasts.
On the other hand, I think that this post focused far too much on Eliezer. In fact, relatively few people in the community have significant forecasting track records, even though the community does tremendously better than most. That scarcity of track records leads to lots of strong opinions based on "understanding" which refuse to defer to forecaster expectations, or even engage much with why the two would differ.