For humans (and probably generally for embedded agents), I endorse acknowledging that probabilities are a wrong but useful model. For any given prediction, the possibility set is incomplete, and the weights are only estimates with lots of variance. I don’t think that a set of distributions fixes this, though in some cases it can capture the model variance better than a single summary can.
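To make that concrete, here's a toy sketch (all numbers made up): keeping a set of candidate distributions exposes the disagreement between models that a single averaged summary throws away. Note it still says nothing about outcomes missing from the possibility set.

```python
# Toy sketch, hypothetical numbers: three candidate models for the same forecast.
candidate_models = [
    {"win": 0.60, "lose": 0.30, "draw": 0.10},
    {"win": 0.45, "lose": 0.45, "draw": 0.10},
    {"win": 0.70, "lose": 0.25, "draw": 0.05},
]
outcomes = candidate_models[0].keys()

# Collapsing to one summary distribution (the usual point-estimate move):
summary = {o: sum(m[o] for m in candidate_models) / len(candidate_models)
           for o in outcomes}

# Keeping the set shows the spread the summary hides:
for o in outcomes:
    ps = [m[o] for m in candidate_models]
    print(f"{o}: summary {summary[o]:.2f}, models range {min(ps):.2f}-{max(ps):.2f}")
```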
EV maximization can only ever be an estimate. No matter HOW you come up with your probabilities and beliefs about value-of-outcome, you’ll be wrong fairly often. But that doesn’t make it useless—there’s no better legible framework I know of. Illegible frameworks (heuristics embedded in the giant neural network in your head) are ALSO useful, and IMO best results come from blending intuition and calculation, and from being humble and suspicious when they diverge greatly.
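A toy illustration of the "EV is only an estimate" point, again with made-up numbers: the action the calculation recommends can flip depending on which plausible probability model you plug in, which is exactly when humility and a cross-check against intuition earn their keep.

```python
# Toy sketch, hypothetical numbers throughout: the EV-maximizing choice
# depends on which plausible probability model you feed in.
candidate_models = [
    {"win": 0.60, "lose": 0.40},
    {"win": 0.45, "lose": 0.55},
    {"win": 0.70, "lose": 0.30},
]
payoffs = {
    "risky_bet": {"win": 100, "lose": -80},
    "safe_bet":  {"win": 30,  "lose": -5},
}

def expected_value(payoff, model):
    # EV = sum over outcomes of P(outcome) * value(outcome)
    return sum(p * payoff[outcome] for outcome, p in model.items())

for i, model in enumerate(candidate_models):
    evs = {name: round(expected_value(v, model), 1) for name, v in payoffs.items()}
    best = max(evs, key=evs.get)
    print(f"model {i}: {evs} -> EV says {best}")
# Under models 0 and 2 the risky bet wins; under model 1 the safe bet does.
```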