Something like Bayesian/expected utility maximization seems useful for understanding agents and agency. However, there is the standard objection that expected utility theory doesn't seem to predict anything in particular. We want a better response to "expected utility theory doesn't predict anything": one that conveys the insight of EU theory about what agents are, without being misinterpreted and without technically failing to constrain expectations at all.
A proposed answer: agents are policies with a high value of g. So EU theory does "predict" something, although it's a "soft" prediction (i.e., agency is a matter of degree).
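To make the "soft prediction" framing concrete, here is one purely illustrative way such a degree-of-agency score could be cashed out (the specific functional form, the restricted utility class, and the random-policy baseline are assumptions of this sketch, not the definition of g intended above):

$$g(\pi) \;=\; \max_{U \in \mathcal{U}} \; \frac{\mathbb{E}_{\pi}[U] \,-\, \mathbb{E}_{\pi_{\mathrm{rand}}}[U]}{\max_{\pi'} \mathbb{E}_{\pi'}[U] \,-\, \mathbb{E}_{\pi_{\mathrm{rand}}}[U]}$$

That is: how close does $\pi$ come to the best attainable expected utility, for the utility function (drawn from some restricted class $\mathcal{U}$) that best rationalizes it, relative to a random-policy baseline $\pi_{\mathrm{rand}}$? The restriction on $\mathcal{U}$ is doing real work: without it, every policy trivially maximizes the utility function "behave exactly like this policy," and the original objection returns. With it, every policy gets a graded score, "agent" just means a policy near the top of the range, and EU theory constrains expectations about high-g systems without making a hard yes/no prediction about any particular one.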