I also got a Fatebook account thanks to this post.
This post lays out a bunch of tools that address what I’ve previously found lacking in personal forecasts, so thanks for the post! I’ve definitely gone observables-first, forecasted primarily about the external world (rather than e.g. “if I do X, will I afterwards think it was a good thing to do?”), and have had the issue of feeling vaguely neutral about everything, which you touched on in Frame 3.
I’ll now try these techniques out and see whether that helps.
...and as I wrote that sentence, I came to think about how Humans are not automatically strategic—particularly that we do not “ask ourselves what we’re trying to achieve” and “ask ourselves how we could tell if we achieved it”—and that this is precisely the type of thing you were using Fatebook for in this post. So, I actually sat down, thought about it and made a few forecasts:
⚖ Two months from now, will I think I’m clearly better at operationalizing cruxy predictions about my future mental state? (Olli Järviniemi: 80%)
⚖ Two months from now, will I think my “inner simulator” makes majorly less in-hindsight-blatantly-obvious mistakes? (Olli Järviniemi: 60%)
⚖ Two months from now, will I be regularly predicting things relevant to my long-term goals and think this provides value? (Olli Järviniemi: 25%)
And noticing that making these forecasts was cognitively heavy and not fluent at all, I made one more forecast:
⚖ Two months from now, will I be able to fluently use forecasting as a part of my workflow? (Olli Järviniemi: 20%)
So far I’ve made a couple of forecasts of the form “if I go to event X, will I think it was clearly worth it” that already resolved, and felt like I got useful data points to calibrate my expectations on.
Woo, great. :)
Whether this works out or not for you, I quite appreciate you laying out the details. Hope it’s useful for you!