I was initially extremely disappointed with the reception of this post. After publishing it, I thought it was the best thing I’d ever written (and I still think that), but it got fewer than 10 karma. (It did pick up more karma weeks later.)
If my model of what happened is roughly correct, the main issue was that I failed to communicate the intent of the post. People seemed to think I was trying to say something about the 2020 election, only to be disappointed when it turned out I wasn’t really doing that. Actually, I was trying to do something much more ambitious: solving the ‘what is a probability’ problem. And I genuinely think I’ve succeeded. I used to feel a slight twinge of confusion every time I thought about this, because I simultaneously believed that predictions can be better or worse and that talking about the ‘correct probability’ is silly, but had no way to reconcile the two. In fact, I think there’s a simple ground truth that resolves the philosophical problem entirely.
I’ve now changed the title and put a note at the start. So anyway, if anyone didn’t click on it because of the title or low karma, I’m hereby virtually resubmitting it.
(Datapoint on initial perception: at the time, I had glanced at the post but didn’t vote or comment, because I thought Steven was in the right in the precipitating discussion, and the “a prediction can assign less probability-mass to the actual outcome than another prediction but still be better” position seemed either confused or confusingly phrased to me; I would say that a good model can make a bad prediction about a particular event, but the model still has to take a hit.)
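(A minimal sketch of the scoring-rule point behind “taking a hit” — my illustration, not from the original discussion: under a proper scoring rule such as the log score, whichever prediction assigns more probability to the realized outcome scores better on that single event, so even a well-calibrated model loses points there; its quality only shows up in the average over many predictions.)

```python
import math

def log_score(p_assigned_to_actual: float) -> float:
    """Log score for the probability assigned to the realized outcome.
    Higher (closer to 0) is better."""
    return math.log(p_assigned_to_actual)

# Hypothetical numbers: a binary event occurs, forecaster A had assigned
# it 0.6, forecaster B had assigned it 0.9.
p_a, p_b = 0.6, 0.9
print(f"A's score: {log_score(p_a):.3f}")  # -0.511
print(f"B's score: {log_score(p_b):.3f}")  # -0.105 -> B scores better here

# The "model still has to take a hit" point: on this event, A loses more
# points than B regardless of how good A's model is overall; a good model
# is vindicated by its average score across many events, not by any one.
```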