This isn’t quite a fully baked idea yet, but personlike agents are so ubiquitous in human modeling of complex systems that I suspect they’re a default of some kind—and that this doesn’t necessarily indicate a lack of deep understanding of a system’s behavior. Programmers often talk about software they’re working on in agent-like terms—the component remembers this, knows about that, has such-and-such a purpose in life—but this doesn’t correlate with imperfect understanding of the software; it’s just a convenient way of thinking about the problem. Likewise for people: I’m not a psychologist or a neuroscientist, but I doubt that people in those professions, who understand their fellows’ emotions far better than I do, regard those emotions as any less real for it.
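To make the agent-like vocabulary concrete, here’s a minimal sketch (the `SessionCache` class, its methods, and its parameters are entirely hypothetical, chosen only for illustration) of the sort of component a programmer might describe in exactly these terms: it “remembers” sessions, “knows” whether one is still live, and its purpose in life is to keep lookups cheap. The intentional language maps onto perfectly ordinary state and logic.

```python
# A hypothetical cache that a programmer might naturally describe as
# "remembering" sessions and "knowing" when to forget them.
import time


class SessionCache:
    """Remembers recently seen sessions and knows when to forget them."""

    def __init__(self, ttl_seconds: float = 300.0):
        self._ttl = ttl_seconds
        self._seen: dict[str, float] = {}  # session id -> last time it was seen

    def remember(self, session_id: str) -> None:
        """Note that this session was just seen."""
        self._seen[session_id] = time.monotonic()

    def knows(self, session_id: str) -> bool:
        """Does the cache still remember this session, or has it expired?"""
        last_seen = self._seen.get(session_id)
        if last_seen is None:
            return False
        if time.monotonic() - last_seen > self._ttl:
            del self._seen[session_id]  # forget stale entries lazily
            return False
        return True


cache = SessionCache(ttl_seconds=60.0)
cache.remember("abc123")
print(cache.knows("abc123"))  # True, until a minute passes without activity
```

Nothing here is mysterious or poorly understood; the agent talk is just a compact way to describe a dictionary and a timestamp check.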
See also