It’s all well and good to say you don’t maximize utility for one reason or another, but when somebody tells me that they actually maximize “minimum expected utility”, my first inclination is to tell them that they’ve misplaced their “utility” label.
My first inclination when somebody says they don’t maximize utility is that they’ve misplaced their “utility” label… can you give an example of a (reasonable?) agent that really couldn’t be (reasonably?) reframed as some sort of utility maximizer?
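To make the contrast concrete, here’s a minimal sketch (my own toy example, with made-up payoffs and priors, not anything from the original exchange) of what “maximize minimum expected utility” looks like as a decision rule next to a plain expected-utility maximizer: the agent keeps a *set* of candidate distributions and picks the action whose worst-case expected utility across that set is highest.

```python
# Toy illustration (hypothetical payoffs and priors): an agent that maximizes
# *minimum* expected utility over a set of candidate probability distributions,
# contrasted with an ordinary expected-utility maximizer under a single prior.

# Two actions, two world states. utilities[action][state] is the payoff.
utilities = {
    "safe":  {"rain": 5.0, "sun": 5.0},   # same payoff either way
    "risky": {"rain": 0.0, "sun": 10.0},  # great if sunny, bad if rainy
}

# The agent is unsure which distribution over states is right,
# so it entertains a set of candidate priors rather than a single one.
candidate_priors = [
    {"rain": 0.1, "sun": 0.9},
    {"rain": 0.6, "sun": 0.4},
]

def expected_utility(action, prior):
    return sum(prior[s] * utilities[action][s] for s in prior)

def eu_maximizer(prior):
    """Standard expected-utility maximizer under a single prior."""
    return max(utilities, key=lambda a: expected_utility(a, prior))

def maximin_eu(priors):
    """Pick the action whose *worst-case* expected utility over the
    candidate priors is highest: 'maximize minimum expected utility'."""
    return max(utilities,
               key=lambda a: min(expected_utility(a, p) for p in priors))

print(eu_maximizer(candidate_priors[0]))  # 'risky' under the optimistic prior
print(maximin_eu(candidate_priors))       # 'safe': its worst case (5) beats risky's (4)
```

Whether you count that as “not maximizing utility” or as maximizing a relabeled utility seems to hinge on where the “utility” label is allowed to sit: on outcomes under a single prior, or on something computed from the whole set of priors.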