I usually think of this in terms of Dennett’s concept of the intentional stance, according to which there is no fact of the matter about whether something is an agent or not. But there is a fact of the matter about whether we can usefully predict its behavior by modeling it as if it were an agent with some set of beliefs and goals.
That sounds an awful lot like asserting that attributing agency is an instance of the mind-projection fallacy.
That may well be true. What problem do you see with that?