If you then deconfuse agency as “its behavior is reliably predictable by the intentional strategy”, I then have the same question: “why is its behavior reliably predictable by the intentional strategy?” Sure, its behavior in the set of circumstances we’ve observed is predictable by the intentional strategy, but none of those circumstances involved human extinction; why expect that the behavior will continue to be reliably predictable in settings where the prediction is “causes human extinction”?
Overall, I generally agree with the intentional stance as an explanation of the human concept of agency, but I do not think it can be used as a foundation for AI risk arguments. For that, you need something else, such as mechanistic implementation details, empirical trend extrapolations, analyses of the inductive biases of AI systems, etc.
The requirement for its behavior being “reliably predictable” by the intentional strategy doesn’t necessarily limit us to postdiction in already-observed situations; we could require our intentional stance model of the system’s behavior to generalize OOD. Obviously, to build such a model that generalizes well, you’ll want it to mirror the actual causal dynamics producing the agent’s behavior as closely as possible, so you need to make further assumptions about the agent’s cognitive architecture, inductive biases, etc. that you hope will hold true in that specific context (e.g. human minds or prosaic AIs). However, these are additional assumptions needed to answer the question of why an intentional stance model will generalize OOD; they don’t replace the intentional stance as the foundation of our concept of agency, because, as you say, it explains the human concept of agency, and we’re worried that AI systems will fail catastrophically in ways that look agentic and goal-directed… to us.
You are correct that having only the intentional stance is insufficient to make the case for AI risk from “goal-directed” prosaic systems, but having it as the foundation of what we mean by “agent” clarifies what more is needed to make the sufficient case—what about the mechanics of prosaic systems will allow us to build intentional stance models of their behavior that generalize well OOD?
Yeah, I agree with all of that.