I seem to ascribe emotions to a system—more generally, I ascribe cognitive states, motives, and an internal mental life to a system—when its behavior is too complicated for me to account for with models that don’t include such things.
This isn’t quite a fully baked idea yet, but personlike agents are so ubiquitous in human modeling of complex systems that I suspect they’re a default of some kind—and that this doesn’t necessarily indicate a lack of deep understanding of a system’s behavior. Programmers often talk about software they’re working on in agent-like terms—the component remembers this, knows about that, has such-and-such a purpose in life—but this doesn’t correlate with imperfect understanding of the software; it’s just a convenient way of thinking about the problem. Likewise for people—I’m not a psychologist or a neuroscientist, but I doubt people in those professions think of their fellows’ emotions as less real for understanding them better than I do.
(The main alternative for complex systems modeling seems to be thinking of systems as an extension of the self or another agent, which seems to crop up mostly for systems tightly controlled by those agents. Cars are a good example—I don’t say “where is my car parked?”, I say “where am I parked?”.)