First of all, these are definitely not opposites, and game-theoretic agency is about much more than just “executing a knowable strategy”. The basic point of embedded agency and the like is that, because of reflectivity etc, idealized game-theoretic agent behavior can only exist (or even be approximated) in the real world at an abstract level which throws out some information about the underlying territory. Game-theoretic agency is still the original goal of the exercise; reflectivity and whatnot enter the picture because they’re a constraint, not because they’re part of what we mean by “agentiness”.
In terms of human rationality—of the sort in the sequences—the recurring theme is that we want to approximate idealized game-theoretic agency as best we can, despite complicated models, reflectivity, etc. Again, game-theoretic agency is the original goal; approximations enter the picture because complexity is a constraint. Nothing about that is contradictory.
Tying it back to the OP: we have a low-level model which may be too complex for “the agent” to represent/reason about directly. We abstract that into a high-level model. The agent is then an idealized game-theoretic agent within the high-level model, but the high-level model itself is lossy. The agent’s own model coincides with the high-level model—that’s the meaning of the clouds. But that still leaves the question of whether and to what extent the high-level model accurately reflects the low-level model—that’s the abstraction part.
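To make that concrete, here is a minimal toy sketch (the state space, dynamics, and utility are all invented for illustration, none of it is from the OP): the low-level model carries detail the agent never reasons about, the agent is an idealized optimizer within the lossy high-level model, and the leftover question is whether the high-level predictions actually track the low-level dynamics.

```python
import random

# Toy setup (illustrative only). Low-level states are (position, noise) pairs;
# the abstraction keeps the position and throws the noise away.

def abstract(low_state):
    """Lossy abstraction: keep position, discard the extra detail."""
    pos, _noise = low_state
    return pos

def low_level_step(low_state, action):
    """Ground-truth dynamics, too detailed for the agent to reason over directly."""
    pos, _noise = low_state
    new_pos = max(0, min(3, pos + action))
    return (new_pos, random.randrange(10))  # fresh irrelevant detail each step

def high_level_step(high_state, action):
    """The agent's own model: dynamics over abstract states only (the 'clouds')."""
    return max(0, min(3, high_state + action))

def utility(high_state):
    return high_state  # the agent just wants to be as far right as possible

def agent_policy(high_state):
    """Idealized game-theoretic agent *within the high-level model*: pick the
    action whose predicted abstract outcome has the highest utility."""
    return max([-1, 0, +1], key=lambda a: utility(high_level_step(high_state, a)))

# The abstraction question: do high-level predictions track what actually happens
# at the low level? Here they do, because high_level_step commutes with abstract();
# a sloppier abstraction could break this even though the agent is "ideal" inside it.
low = (0, 3)
for _ in range(5):
    action = agent_policy(abstract(low))
    predicted = high_level_step(abstract(low), action)
    low = low_level_step(low, action)
    assert predicted == abstract(low)
```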
Interesting take. When I see “agenty” used on this site and related blogs, it usually seems to map to something like self-actualization or perceived locus of control, which are more psychological framings. I hadn’t thought much about how different (or similar) it is to “agent” in the decision-theoretic and game-theoretic usage, which is not about the feeling of control but about behavior selection according to legible reasoning.
Again, decision theory/game theory are not about “executing a knowable strategy” or “behavior selection according to legible reasoning”. They’re about what goal-directed behavior means, especially under partial information and in the presence of other goal-directed systems. The theory of decisions/games is the theory of how to achieve goals. Whether a legible strategy achieves a goal is mostly incidental to decision/game theory—there are some games where legibility/illegibility could confer an advantage, but that’s not really something that most game theorists study.
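To put it in the barest possible terms, here is a toy sketch of what the formalism does talk about (the probabilities, actions, and utilities are invented for illustration): maximize expected goal-achievement under partial information. Legibility of the resulting strategy doesn’t appear anywhere in the setup.

```python
# Minimal decision-theory sketch (numbers and names are made up): an agent with
# partial information picks the action that best achieves its goal in expectation.
beliefs = {"rain": 0.3, "sun": 0.7}           # partial information about the world
utilities = {                                  # how well each outcome serves the goal
    ("umbrella", "rain"): 1.0, ("umbrella", "sun"): 0.6,
    ("no_umbrella", "rain"): 0.0, ("no_umbrella", "sun"): 1.0,
}

def expected_utility(action):
    return sum(p * utilities[(action, state)] for state, p in beliefs.items())

best = max(["umbrella", "no_umbrella"], key=expected_utility)
print(best, expected_utility(best))  # umbrella, 0.72 (vs. 0.70 for no_umbrella)
```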