I’m confused. (As in, actually confused. The following should hopefully point at what pieces I’m missing in order to understand what you mean by a “problem” for the notion.)
Vingean agency “disappears when we look at it too closely”
I don’t really get why this would be a problem. I mean, “agency” is an abstraction, and every abstraction becomes predictively useless once you can compute the lower layer perfectly, at least if you assume compute is cheap. Balloons!
Imagine you’ve never seen a helium balloon before, and you see it slowly soaring into the sky. You could have predicted this by using a few abstractions like the density of gases and Archimedes’ principle. Alternatively, if you had the resources, you could make the identical prediction (with inconsequentially higher precision) by extrapolating from the velocities and weights of all the individual molecules and computing that the sum of forces acting on the bottom of the balloon exceeds the sum acting on the top. I don’t see how the latter being theoretically possible implies a “problem” for abstractions like “density” and “Archimedes’ principle”.
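To make the abstraction-level prediction concrete, here is a minimal sketch of the buoyancy calculation (the symbols $\rho_\text{air}$, $\rho_\text{He}$, $V$, and $m_\text{envelope}$ are my own illustrative names, not anything from the discussion above): the balloon rises exactly when the Archimedean buoyant force exceeds its total weight,

$$\rho_\text{air} V g \;>\; \rho_\text{He} V g + m_\text{envelope}\, g \quad\Longleftrightarrow\quad (\rho_\text{air} - \rho_\text{He})\, V \;>\; m_\text{envelope},$$

i.e. two macroscopic densities and one principle, versus tracking every molecule.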
I think the main problem is that expected utility theory is in many ways our most well-developed framework for understanding agency, but it makes no empirical predictions, and in particular it does not tie agency to other important notions of optimization we can come up with (and which, in fact, seem like they should be closely tied to agency).
I’m identifying one possible source of this disconnect.
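As a hedged aside on why “no empirical predictions” is a fair description (a standard observation, not something from the discussion above): expected utility maximization says only that

$$\pi^* \in \arg\max_\pi \; \mathbb{E}_{o \sim P(\cdot \mid \pi)}\!\left[U(o)\right],$$

and for any observed policy one can cook up a utility function (e.g. over full action–observation histories) under which that policy is optimal, so the framework by itself rules out no behavior; all the content has to come from constraints on $U$ and $P$.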
The problem feels similar to trying to understand physical entropy without any notion of uncertainty. So it’s like: we understand balloons at the atomic level, and we notice that how inflated they are seems to depend on the temperature of the air, but temperature is totally divorced from our atomic-level picture (because we can’t understand entropy and thermodynamics without using any notion of uncertainty). So we have this concept of balloons and this separate concept of inflatedness, which really, really should relate to each other, but we can’t bridge the gap because we’re not thinking about uncertainty in the right way.
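To gesture at why the analogy lands (my gloss, assuming the standard statistical-mechanics definitions): the Gibbs/Shannon form of entropy,

$$S = -k_B \sum_i p_i \ln p_i,$$

is a functional of a probability distribution $p$ over microstates, not of any single microstate, so a viewer who can compute the exact molecular trajectory has nowhere to even attach the concept; the same seems true of whatever “agency” is supposed to attach to once you can compute the agent exactly.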
Damn this is really good