Not an explicit exercise, and likely mentioned or alluded to somewhere in the forecasting material, but remember: any time you are about to run an outside view/reference class forecast, you have the opportunity to get some calibration practice by first answering the question yourself from what you already know, including confidence bounds on your estimate. When you then look up the data and get a surprise, asking yourself why you are surprised helps you figure out what generators the experts have that you don't.
Since you can do this all the time (how many Google searches do you run in a day?), it gives you far more data than one-time exercises.
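To make the habit concrete, here is a minimal sketch of what logging these guesses could look like, assuming you record a 90% confidence interval before each lookup. The names (`Prediction`, `calibration_rate`) and the example questions are hypothetical, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    question: str
    low: float     # lower bound of your 90% interval, set before looking anything up
    high: float    # upper bound of your 90% interval
    actual: float  # the value you found when you looked it up

def calibration_rate(log: list[Prediction]) -> float:
    """Fraction of answers that fell inside your stated intervals.
    Well-calibrated 90% intervals should capture ~90% of answers."""
    hits = sum(p.low <= p.actual <= p.high for p in log)
    return hits / len(log)

# Hypothetical entries: guess first, search second, record both.
log = [
    Prediction("Population of Finland (millions)", 4.0, 7.0, 5.5),
    Prediction("Year the transistor was invented", 1940, 1955, 1947),
    Prediction("Boiling point of ethanol (deg C)", 70, 90, 78.4),
]
print(f"Calibration: {calibration_rate(log):.0%}")
```

The interesting entries are the misses: each one is a prompt to ask what the experts (or Wikipedia editors) knew that your model didn't.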
It could use a clever anchor phrase for memory purposes. Open to suggestions.