Perhaps there’s some back story to this post that I missed, so forgive me if what I’m about to say has been discussed.
You might consider reading “Superforecasting: The Art and Science of Prediction,” by Philip Tetlock. Or go to the Good Judgment Project web site and watch the 5-part Superforecasting master class.
First, the question has to pass the clairvoyant test. Second, you might want to have some scheme for Bayesian updating your forecast. And then you’ll want to use Brier Scores (or something like them) to assess your accuracy.
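To make the updating step concrete, here is a minimal sketch in Python (the thread mentions R and Excel; Python is used here purely for illustration, and the likelihood-ratio framing is just one common way to express the update):

```python
def bayes_update(prior, likelihood_ratio):
    """Update a probability forecast given new evidence.

    likelihood_ratio = P(evidence | event) / P(evidence | no event).
    The update is done in odds space: posterior odds = prior odds * LR.
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Example: a 30% forecast, then evidence twice as likely if the event holds.
print(round(bayes_update(0.30, 2.0), 3))  # prints 0.462
```

Evidence that is equally likely either way (likelihood ratio 1) leaves the forecast unchanged, which is a quick sanity check on the arithmetic.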
If you know R, there’s actually a Brier score function you can use. But I can’t imagine it’s very difficult to set up in Excel.
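The Brier score itself is simple enough to compute by hand, whatever the tool; a minimal Python sketch (presumably equivalent to what the R function does):

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and binary
    outcomes (0 or 1). Lower is better; always guessing 50% scores 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three forecasts of 80%, 40%, 90%; the first and third events happened.
score = brier_score([0.8, 0.4, 0.9], [1, 0, 1])  # roughly 0.07
```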
Again, sorry if I’m stating the obvious.
Thanks. This is all very relevant. And no, there is no backstory, at least not that I shared anywhere.
Yes, I read “Superforecasting”. I didn’t know they had a master class; I will look it up. I suspect their teachings will be somewhat less useful for predictions of personal importance, since different biases will be at play here, but it should be worthwhile to watch anyway.
Actually, I think if you are going to assess your own predictions, you can afford the luxury of being a bit less specific, especially for short-term predictions. For example, consider a made-up question:
“Will Adam be able to get back to cycling within a month [after a recent accident]?”
If Adam resumes cycling but it causes him considerable pain, I know that’s not what I intended when asking the original question. On the other hand, if Adam recovers fully but starts playing rugby instead of cycling because he discovers he enjoys it more, I know the answer to the intended question is “yes”. (The imprecise part of the question here is “be able to”, but as long as I can reliably recall the intention when writing those words, they cause no loss of precision.)
Hmm. For now I was planning to make my predictions once and forget about them until the outcome is known. I’m not sure I want to spend more effort on them, at least not so early into the project.
A Brier score would be great at telling me how accurate I am, but not what mistakes I’m making, at least based on my very limited understanding of the metric. As a basic analysis method, I was planning to group my predictions by forecast probability (e.g. 0%-10%, 10%-20%, … ranges, or maybe ranges 1pp wide at the extremes that grow exponentially towards the centre, which would probably make more sense) and simply chart them grouped by tag. I must admit my knowledge of statistics has always been very poor, so I’m sure there is a better analysis/visualisation methodology.
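The grouping described above can be sketched roughly as follows; the fixed-width buckets and the 0/1 outcome encoding are assumptions, and Python is used only for illustration:

```python
def calibration_buckets(predictions, n_buckets=10):
    """Group (forecast_probability, outcome) pairs into fixed-width buckets.

    For each non-empty bucket, report the mean forecast and the observed
    hit rate; a well-calibrated forecaster has the two roughly equal.
    Outcomes are encoded as 0 (didn't happen) or 1 (happened).
    """
    buckets = [[] for _ in range(n_buckets)]
    for p, outcome in predictions:
        idx = min(int(p * n_buckets), n_buckets - 1)  # p == 1.0 joins the top bucket
        buckets[idx].append((p, outcome))
    rows = []
    for i, pairs in enumerate(buckets):
        if not pairs:
            continue
        mean_forecast = sum(p for p, _ in pairs) / len(pairs)
        hit_rate = sum(o for _, o in pairs) / len(pairs)
        rows.append((i / n_buckets, (i + 1) / n_buckets,
                     mean_forecast, hit_rate, len(pairs)))
    return rows

# Example: two ~80% forecasts (one hit, one miss) and one 15% forecast (miss).
rows = calibration_buckets([(0.85, 1), (0.82, 0), (0.15, 0)])
```

Plotting mean forecast against hit rate per bucket gives the standard calibration curve, which makes over- and under-confidence visible in a way a single accuracy number cannot.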
(Probably an unnecessary word of caution:) do not forecast your own behavior, due to the risk of reduced agency.
That’s a very good point. What do you think about predicting events on which I might still have an impact? Those are some of the most important forecasts for me: I will decide whether to attempt something based on my predicted probability of success. But then my forecast might affect that probability, which makes the whole thing much more complicated.
Agreed! Tricky territory. I think it’s fair to take an outside view as a first cut (e.g. what fraction of people survive Everest), then very carefully evaluate whether the reference class is relevant. Yudkowsky writes about this quite a bit, though I cannot recall exactly where.
Thanks. I will be on the lookout for relevant writings. I’m slowly going through Yudkowsky’s books/posts, so I’m sure I will stumble on it sooner or later.