This appendix struck me as exceedingly useless when I first encountered it. Most of the suggestions follow the pattern of “Find the right balance between two extremes”, but don’t give enough context to figure out where that balance is.
It’s like he talked to the experts, realized that much of what they were doing was tacit knowledge that gave them a feel for this sort of thing, but didn’t do the modeling work to pull out the models that actually underlie that tacit knowledge.
I’d be curious whether anyone has improved their calibration using these guidelines. Personally, I got much more mileage out of *How to Measure Anything*’s five calibration strategies.
Edit: someone reminded me that Tetlock or his publishers might not like me reproducing large parts of his book on LW, but if anyone wants to send me a PM we can figure something out.
This isn’t the only section of *Superforecasting* I intend to reproduce on LW; there are more important parts. I’ve taken the time to transcribe the parts I found most important. Tetlock did do the modeling work behind each of these commandments in the book itself (each commandment corresponds to a chapter). If you highlight which parts you’d like to zoom in on, I’m happy to transcribe the sections of the book covering those specific commandments.
I’d be interested in the parts that you felt most improved your calibration. Personally, most of what I got from the book was about how effective forecasting tournaments were, what their limits were, and how to run them well. I got very little in terms of better calibration.
I didn’t get much that improved my calibration either. I don’t think the most valuable parts of the book are about directly improving individual calibration. I see the book as a guide on how to literally become a superforecaster; unfortunately, that takes so much time that it’s infeasible for most individuals, so the book wasn’t written that way. The reason I’m uploading parts of *Superforecasting* in the first place is that they’re the ones most relevant to my effort to build a giant forecasting machine within the rationality/EA communities, which could include forecasting tournaments. If what you’re after is improved calibration, maybe I won’t need to talk about the book at all, because I intend to get more rationalists to learn by just *doing* it, instead of reading.