Using “cruxiness” instead of operationalization for predictions.
One problem with making predictions is “operationalization.” A simple-seeming prediction can have endless edge cases.
For personal predictions, I often think it’s basically not worth worrying about. Write something rough down, and then say “I know what I meant.” But sometimes this is actually unclear, and you may be tempted to interpret a prediction in a favorable light. And at the very least, it’s a bit unsatisfying for people who aren’t actually sure what they meant.
One advantage of cruxy predictions (aside from their being particularly useful in the first place) is that if you know what decision a prediction was a crux for, you can judge an ambiguous resolution based on “would this actually have changed my mind about the decision?”
(“Cruxiness instead of operationalization” is a bit overly click-baity. Realistically, you need at least some operationalization to clarify for yourself what a prediction even means in the first place. But I think you can get away with more marginal fuzziness if you’re clear on how the prediction was supposed to inform your decision-making.)
⚖ A year from now, will I have used “cruxiness-as-operationalization” on a prediction in the preceding three months, and found it helpful? (Raymond Arnold: 50%)
I would phrase this another way: when making a prediction, you need to satisfice on operationalization, but should seek to maximize cruxiness. Operationalization just needs to be good enough for readers (including your future self) to get a solid grasp of what you mean. Cruxiness is what makes the prediction worth thinking about in the first place.