In some contexts it makes more sense to *minimize expected regret* rather than to *maximize expected utility*.
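The contrast can be made concrete with a toy payoff table (the numbers and state names here are hypothetical). One wrinkle worth noting: under a fixed probability distribution, minimizing *expected* regret picks the same action as maximizing expected utility (the expected best-achievable utility is a constant), so the regret framing usually means worst-case (minimax) regret, as in this sketch:

```python
# Toy decision problem, hypothetical numbers: three actions,
# two equally likely states, utility table u[action][state].
u = {
    "a": {"s1": 0,  "s2": 100},
    "b": {"s1": 70, "s2": 10},
    "c": {"s1": 40, "s2": 45},
}
p = {"s1": 0.5, "s2": 0.5}

def expected_utility(act):
    return sum(p[s] * u[act][s] for s in p)

def max_regret(act):
    # Regret in state s: best achievable utility in s minus what act gets.
    return max(max(u[other][s] for other in u) - u[act][s] for s in p)

eu_choice = max(u, key=expected_utility)  # maximize expected utility -> "a"
mr_choice = min(u, key=max_regret)        # minimax regret -> "c"
```

Here expected utility favors the high-variance action "a" (EU 50 vs. 40 and 42.5), while minimax regret favors the hedged action "c" (worst-case regret 55 vs. 70 and 90), so the two criteria genuinely come apart.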
It’s worth examining those contexts: I predict you’ll come away with a deeper understanding of “expected utility” and realize this isn’t the most useful framing. The real contrast isn’t “minimize regret” vs. “expected utility”; it’s “expected long-term, vaguely-defined (illegible) utility” vs. “expected visible, near-term, identifiable utility.”
Like what?