I think the main problem is that expected utility theory is in many ways our most well-developed framework for understanding agency, but it makes no empirical predictions, and in particular it does not tie agency to other important notions of optimization we can come up with (and which, in fact, seem like they should be closely tied to agency).
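(For concreteness, a minimal sketch of the standard VNM statement this is pointing at: an agent whose preferences $\succeq$ over lotteries satisfy the VNM axioms can be represented by some utility function $u$ with

$$A \succeq B \iff \mathbb{E}[u(A)] \ge \mathbb{E}[u(B)].$$

This is a coherence condition on preferences rather than a constraint on which physical systems count as agents or what they will do, which is one way of seeing why it makes no empirical predictions on its own.)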
I’m identifying one possible source of this disconnect.
The problem feels similar to trying to understand physical entropy without any notion of uncertainty. It’s like: we understand balloons at the atomic level, and we notice that how inflated they are seems to depend on the temperature of the air, but temperature is totally divorced from our atomic-level picture (because we can’t understand entropy and thermodynamics without some notion of uncertainty). So we have this concept of balloons and this separate concept of inflatedness, which really should relate to each other, but we can’t bridge the gap because we’re not thinking about uncertainty in the right way.
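(To spell out the statistical-mechanics side of the analogy: entropy, and hence temperature, is defined over a probability distribution on microstates, so neither appears in a purely deterministic atomic-level description. In the standard Gibbs form,

$$S = -k_B \sum_i p_i \ln p_i, \qquad \frac{1}{T} = \frac{\partial S}{\partial E},$$

where the $p_i$ are microstate probabilities and the second equation is the usual thermodynamic definition of temperature, holding volume and particle number fixed.)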
Damn this is really good