Two nitpicks and a reference:

> an agent’s goals might not be linearly decomposable over possible worlds due to risk-aversion
Risk aversion doesn’t violate additive separability. E.g., for $u(x)=x^a$ we always get $E[u(x)]=\sum_i p_i x_i^a$, whether $a=1$ (risk neutrality) or $a=1/2$ (risk aversion). Though some alternatives to expected utility, like Buchak’s REU theory, can allow certain sources of risk aversion to violate separability.
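To make this concrete, here's a minimal numerical sketch (the lottery outcomes and probabilities are made up): the evaluation stays a probability-weighted sum of per-outcome terms for either exponent, yet $a=1/2$ still produces risk aversion, visible in the certainty equivalent falling below the expected payoff.

```python
import numpy as np

# A toy lottery: outcomes x_i with probabilities p_i (illustrative numbers).
x = np.array([1.0, 4.0, 9.0])
p = np.array([0.2, 0.5, 0.3])

def expected_utility(x, p, a):
    """E[u(x)] = sum_i p_i * x_i**a -- additively separable across outcomes
    for any exponent a, risk-neutral (a=1) or risk-averse (a=1/2)."""
    return np.sum(p * x**a)

for a in (1.0, 0.5):
    eu = expected_utility(x, p, a)
    # Certainty equivalent: the sure amount with the same utility, u^{-1}(EU).
    ce = eu ** (1 / a)
    print(f"a={a}: E[u(x)]={eu:.3f}, certainty equivalent={ce:.3f}")

# With a=1/2 the certainty equivalent (4.41) falls below E[x]=4.9:
# risk aversion, even though E[u] is still an additively separable sum.
```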
> when features have fixed marginal utility, rather than being substitutes
Perfect substitutes have fixed marginal utility. E.g., $v(x,y)=x+2y$ always has marginal utilities of 1 and 2.
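A quick check by finite differences (the sample bundles are arbitrary) shows the marginal utilities are the same at every bundle, which is exactly the perfect-substitutes case:

```python
# Marginal utilities of v(x, y) = x + 2y are constant everywhere.
def v(x, y):
    return x + 2 * y

eps = 1e-6
for (x0, y0) in [(1.0, 1.0), (10.0, 0.5), (0.1, 100.0)]:
    mu_x = (v(x0 + eps, y0) - v(x0, y0)) / eps  # dv/dx -> 1
    mu_y = (v(x0, y0 + eps) - v(x0, y0)) / eps  # dv/dy -> 2
    print(f"at ({x0}, {y0}): MU_x={mu_x:.4f}, MU_y={mu_y:.4f}")

# Prints MU_x ~ 1 and MU_y ~ 2 at every bundle: fixed marginal utility.
```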
> I’ll focus on linearly decomposable goals which can be evaluated by adding together evaluations of many separate subcomponents. More decomposable goals are simpler
There’s an old literature on separability in consumer theory that’s since been tied to bounded rationality. One move that’s made is to grant weak separability across groups of objects—features—to rationalise the behaviour of optimising across groups first, and within groups second. Pretnar et al. (2021) describe how this can arise from limited cognitive resources.
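For flavour, here's a sketch of that two-stage move under assumed Cobb–Douglas sub-utilities (this is my own illustration, not Pretnar et al.'s model; all names, prices, and shares are hypothetical). Weak separability across the two groups is what licenses splitting the budget between groups using only their indirect utilities, then solving each group separately:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical weakly separable utility: U = u_food(.) + u_fun(.),
# with Cobb-Douglas (log-form) sub-utilities within each group.
p_food = np.array([2.0, 3.0])   # prices of two food goods (illustrative)
p_fun  = np.array([1.0, 4.0])   # prices of two leisure goods (illustrative)
budget = 100.0

def sub_demand(group_budget, prices, shares):
    """Within-group stage: Cobb-Douglas demand spends fixed expenditure
    shares of the group budget on each good."""
    return shares * group_budget / prices

def sub_indirect_utility(group_budget, prices, shares):
    """Maximised within-group (log) utility given the group budget."""
    q = sub_demand(group_budget, prices, shares)
    return np.sum(shares * np.log(q))

shares_food = np.array([0.5, 0.5])
shares_fun  = np.array([0.7, 0.3])

# Across-group stage: split the budget by optimising over the groups'
# indirect utilities only -- valid because U is weakly separable.
def neg_total_utility(b_food):
    return -(sub_indirect_utility(b_food, p_food, shares_food)
             + sub_indirect_utility(budget - b_food, p_fun, shares_fun))

res = minimize_scalar(neg_total_utility, bounds=(1e-6, budget - 1e-6),
                      method="bounded")
b_food = res.x
print(f"budget to food: {b_food:.2f}, to fun: {budget - b_food:.2f}")
print("food bundle:", sub_demand(b_food, p_food, shares_food))
print("fun bundle:", sub_demand(budget - b_food, p_fun, shares_fun))
```

The cognitive appeal is that the across-group problem is one-dimensional here, and each within-group problem only needs local information about its own goods.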