I think your definition is really a definition of powerful things (which is of course extremely relevant!).
I’d had some incomplete thoughts in this direction, though I’d taken a slightly different tack from yours. I’ll paste the relevant notes in.
Descriptions (or models) of systems vary in their complexity and their accuracy. Descriptions which are simpler and more accurate are preferred. Good descriptions are those which are unusually accurate for their level of complexity. For example ‘spherical’ is a good description of the shape of the earth, because it’s much more accurate than other descriptions of that length.
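To gesture at one way this could be made quantitative (just a sketch; the symbols $A$ for accuracy, $K$ for complexity, and $\bar{A}$ for typical accuracy are illustrative notation, not something I’m committed to):

$$\text{goodness}(d) \;=\; A(d) \,-\, \bar{A}\big(K(d)\big), \qquad \bar{A}(k) = \text{typical accuracy of descriptions of complexity} \le k.$$

On this reading ‘spherical’ scores well because its accuracy is far above the typical accuracy of descriptions of that length.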
Often we want to describe subsystems. To work out how good such descriptions are, we can ask how good the implied description of the whole system is once we add a perfect description of the rest of the system.
Definition: The agency of a subsystem is the degree to which good models of that system predict its behaviour in terms of high-level effects on the world around it.
Note this definition is not that precise: it replaces a difficult notion (agency) with several other imprecise notions (degree to which; good models of that system; high-level effects). My suggestion is that, while still awkward, these are more tractable than ‘agent’. I shy away from giving explicit forms here; I think this should generally be possible, and I could give guesses in several cases, but at the moment questions about precise functional forms seem a distraction from the architecture. Also note that this definition is continuous rather than binary.
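Purely to illustrate the kind of form I have in mind (not a guess I would defend), one could compare the best effect-based models with the best models of comparable complexity that don’t appeal to high-level effects:

$$\text{agency}(S) \;\approx\; \max_{k}\Big[A_{\text{effect}}(k) - A_{\text{mech}}(k)\Big],$$

where $A_{\text{effect}}(k)$ and $A_{\text{mech}}(k)$ are the best predictive accuracies achievable at complexity $k$ by models that do and don’t describe $S$ in terms of its high-level effects on its surroundings. The notation is mine, and many other forms would fit the informal definition.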
Proposition: Very simple systems cannot have high degrees of agency. This is because if the system in its entirety admits a short description, you can’t do much better by appealing to motivation.
Observation: Some subsystems may have high agency just with respect to a limited set of environments. A chess-playing program has high agency when attached to a game of chess (and we care about the outcome), and low agency otherwise. A subsystem of an AI may have high agency when properly embedded in the AI, and low agency if cut off from its tools and levers.
Comment: This definition picks out agents, but it also picks out powerful agents. Giving someone an army increases their agency. I’m not sure whether this is a desirable feature. If we wanted to abstract away from that, we could do something like:
Definition: The power of a subsystem is the degree to which it has large effects on systems it is embedded in.
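Again only as a sketch of a possible form (the notation is illustrative): if $E(S)$ is the high-level state the embedding system ends up in with subsystem $S$ in place, $S_0$ is some default or inert replacement, and $d$ is a distance on high-level states, then

$$\text{power}(S) \;\approx\; d\big(E(S),\, E(S_0)\big).$$

This measures only the size of the effects, not whether they flow through anything agent-like.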
Interesting. Some of this may be relevant when I post the “models as definitions” post.