Because it acts in a manner that keeps it and the person of interest near each other.
So does a magnet. So does a homing missile. But a north pole does not love a south pole, and a missile does not love its target. Neither do rivers long to meet the sea, nor does fire long to ascend to heaven, nor do rocks desire the centre of the earth.
Why should I attribute emotions to you?
Because you experience them yourself, and I seem to be the same sort of thing as you are. Without any knowledge of what emotions are, that’s the best one can do.
This does not work for robots at the current state of the art.
True, but we can make robots better than that. The one I mentioned was capable of coming to act that way in the presence of a person. I don’t know much about that particular robot, but we can make ones that learn to act in a manner that puts them into situations similar to the one they’re in at a given time, which is the best way I can define happiness, and we can make them happy when they’re near a specific person.
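For concreteness, here is a minimal sketch of that definition of happiness: a score that is high when the robot’s current situation resembles the ones it has recently been in, plus a bonus for being near one specific person. Everything in it (the state representation, the distance measure, the weights) is invented for illustration, not taken from any actual robot.

```python
# Purely illustrative sketch of the "happiness" definition above:
# high when the robot's current situation resembles the situations
# it has recently been in, plus a bonus for being near one specific
# person. All names and weights here are invented.

import math

def happiness(current_state, recent_states, person_distance,
              similarity_scale=1.0, proximity_weight=2.0):
    """Return a toy 'happiness' score for a robot.

    current_state   -- tuple of numbers describing the situation now
    recent_states   -- list of such tuples from the recent past
    person_distance -- distance to the specific person, in metres
    """
    if not recent_states:
        similarity = 0.0
    else:
        # Average similarity between now and the recent past:
        # 1.0 when identical, falling toward 0 as states diverge.
        def sim(a, b):
            return math.exp(-math.dist(a, b) / similarity_scale)
        similarity = sum(sim(current_state, s) for s in recent_states) / len(recent_states)

    # Being near the person adds to the score; far away adds almost nothing.
    proximity = proximity_weight * math.exp(-person_distance)

    return similarity + proximity


# A robot in a familiar situation with the person 0.5 m away scores higher
# than one in an unfamiliar situation with the person 10 m away.
print(happiness((0.0, 0.0), [(0.1, 0.0), (0.0, 0.2)], person_distance=0.5))
print(happiness((5.0, 5.0), [(0.1, 0.0), (0.0, 0.2)], person_distance=10.0))
```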
In any case, there is still a more basic problem. Why do you say that a magnet doesn’t love? I’m not saying that it does to any non-negligible extent, but it would be helpful to have a definition more precise than “do what humans do”.
This does not work for robots at the current state of the art.
Can you give an example of when it possibly could work for robots? It sounds like you’re saying that it’s not love unless they’re conscious. While that is a necessary condition for making it a consciousness test, if that’s how you know it’s love then it’s circular: in order to prove it’s conscious it has to prove it can love, and in order to prove it can love it must prove that it’s conscious.
No, because I don’t know what emotions are. I don’t believe anyone else does either. Neither does anyone know what consciousness is. Nobody even knows what an answer to the question would look like.
I seem to ascribe emotions to a system—more generally, I ascribe cognitive states, motives, and an internal mental life to a system—when its behavior is too complicated for me to account for with models that don’t include such things.
I can describe the behavior of a magnet without resorting to such things, so I don’t posit them.
That’s not to say that I’m correct to ascribe them to systems with complicated behavior… I might be; I might not be. Merely to say that it’s what I seem to do. It’s what other humans seem to do as well… hence the common tendency to ascribe emotions and personalities to all sorts of complex phenomena.
If I were somehow made smart enough to fully describe your behavior without recourse to what Dennett calls the intentional stance, I suspect I would start to experience your emotional behavior as “fake” somehow.
I seem to ascribe emotions to a system—more generally, I ascribe cognitive states, motives, and an internal mental life to a system—when its behavior is too complicated for me to account for with models that don’t include such things.
This isn’t quite a fully baked idea yet, but personlike agents are so ubiquitous in human modeling of complex systems that I suspect they’re a default of some kind—and that this doesn’t necessarily indicate a lack of deep understanding of a system’s behavior. Programmers often talk about software they’re working on in agent-like terms—the component remembers this, knows about that, has such-and-such a purpose in life—but this doesn’t correlate with imperfect understanding of the software; it’s just a convenient way of thinking about the problem. Likewise for people—I’m not a psychologist or a neuroscientist, but I doubt people in those professions think of their fellows’ emotions as less real for understanding them better than I do.
(The main alternative for complex systems modeling seems to be thinking of systems as an extension of the self or another agent, which seems to crop up mostly for systems tightly controlled by those agents. Cars are a good example—I don’t say “where is my car parked?”, I say “where am I parked?”.)
I seem to ascribe emotions to a system—more generally, I ascribe cognitive states, motives, and an internal mental life to a system—when its behavior is too complicated for me to account for with models that don’t include such things.
You mean like a pseudorandom number generator?
Motives are easy to model. You just set what the system optimizes for. The part that’s hard to model is creativity.
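To make both of those points concrete, here is a small, purely illustrative sketch; every name and constant in it is invented. The first half is a linear congruential generator, whose output looks patternless even though it follows a one-line rule, so whether behavior is “too complicated to account for” depends on whether you know the rule. The second half models a “motive” by doing nothing more than writing down what the system optimizes for and having it pick the best-scoring move.

```python
# Two toy illustrations (all names and numbers invented for this sketch).

# 1. A linear congruential generator: the output below looks patternless,
#    yet it comes from a one-line rule -- whether behavior seems "too
#    complicated to account for" depends on whether you know the rule.
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    x, out = seed, []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x % 100)   # reduce to 0-99 for readability
    return out

print(lcg(seed=42, n=10))

# 2. A "motive" modelled as nothing more than an explicit objective:
#    the agent "wants" to be near the person only in the sense that it
#    picks whichever move scores best under this function.
def objective(position, person_position):
    return -abs(position - person_position)   # higher when closer

def step(position, person_position, moves=(-1, 0, 1)):
    return max((position + m for m in moves),
               key=lambda p: objective(p, person_position))

pos, person = 0, 5
for _ in range(6):
    pos = step(pos, person)
print(pos)   # ends at 5, the person's position
```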
If I were somehow made smart enough to fully describe your behavior without recourse to what Dennett calls the intentional stance, I suspect I would start to experience your emotional behavior as “fake” somehow.
That’s a bad sign. My emotional behavior wouldn’t become fake due to your intelligence.