My own practical version of the Turing test is “can we be friends?” (It used to be “can we fall in love?”) Once an AI passes a test like that I think the question of whether it’s “genuinely” conscious should be dissolved.
Actually, scratch that: either way, I think the question of whether something is “genuinely” conscious should be dissolved.
I mostly think “is X conscious?” can be usefully replaced by a combination of “does X think?” and “does X experience pain/pleasure/etc.?” In both cases, answering the question is a matter of making inferences from incomplete knowledge, and it’s always possible to be wrong, but it’s also usually possible to be legitimately confident. If there’s anything else being asked, I don’t know what it is.
I don’t think that’s dissolving far enough. The questions those questions are stand-ins for, I think, are questions like “does X deserve legal consideration?” or “does X deserve moral consideration?” and we might as well be explicit about this.
I don’t think those questions are mere stand-ins. I think the answers to “does X deserve legal consideration?” or “does X deserve moral consideration?” depend heavily on “Is X conscious?” and “Does X experience pain/pleasure?” That is, if we answer “Is X conscious?” and “Does X experience pain/pleasure?” then we can answer “does X deserve legal consideration?” and “does X deserve moral consideration?”
If “Is X conscious?” and “Does X experience pain/pleasure?” simply stand-ins for “does X deserve legal consideration?” or “does X deserve moral consideration?”, then if we answered the latter two we’d stop caring about the former. I don’t think that’s so. There are still very interesting, very deep scientific questions to be answered about just what it means when we say something is conscious.
The problem is that I, for one, don’t know what the question “Is X conscious?” means, and I’m not sure how to judge “Does X experience pain/pleasure?” in a non-biological context either. Nor has anyone else ever convinced me they know the answers to these questions. Still, it does seem as if neurobiology is making slow progress on these questions, so they’re probably not intractable or meaningless. When all is said and done, they may not mean exactly what we vaguely feel they mean today; but I suspect that “conscious” will be more like the concept of “atom” than the concept of “ether”. That is, we’ll recognize a clear connection between the original use of the word and the much more refined and detailed understanding we eventually come to. On the other hand, I could be wrong about that, and consciousness could turn out to be as useless a concept as ether or phlogiston.
Yeah, I waffled about this and ultimately decided not to say that, but I’m not confident.
I’m not really clear on whether what people are really asking is (e.g) “does X deserve moral consideration?,” or whether it just happens to be true that people believe (e.g.) that it’s immoral to cause pain, so if X can experience pain it’s immoral to cause X pain, and therefore X deserves moral consideration.
But I agree with you that if the former turns out to be true, then that’s the right question to be asking.
Admittedly, my primary reason for being reluctant to accept that is that I have no idea how to answer that question a priori, so I’d rather not ask it… which of course is more bias than evidence.
So how do you decide whether or not X deserves moral consideration: based on something like long-term interactions, or by looking at its code? I mean, if the real question is “how do I feel about X,” we might as well be explicit about that.
Are you asking if it can consider you a friend, or if you can consider it a friend?
There has been a robot designed to love. Due to its simplistic nature, it was a crazy stalker, but nonetheless it can love. Emotion is easy. Intelligence is hard. Is your test just to see if it’s human enough that you feel comfortable calling whatever emotion it feels “love”?
Due to its simplistic nature, it was a crazy stalker, but nonetheless it can love. Emotion is easy.
Why should I attribute emotions to this contraption? Because there’s a number somewhere inside it that the programmer has suggestively called “love”? Because it interacts in ways which are about as similar to love as Eliza is to a conversational partner?
Because it acts in a manner that keeps it and the person of interest near each other.
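(To make the mechanism under discussion concrete, here is a minimal sketch in Python of a robot whose “love” is just a suggestively named number that grows with proximity and drives it toward the person. Every name and number here is hypothetical; it is not based on the robot mentioned above.)

```python
# Hypothetical sketch: a "robot that loves" as a single suggestively named
# variable. Proximity to the person increases `love`, and `love` drives the
# robot toward the person. Nothing here describes any real system.

class ToyRobot:
    def __init__(self):
        self.position = 0.0
        self.love = 0.0  # the number the programmer chose to call "love"

    def step(self, person_position):
        distance = abs(person_position - self.position)
        # "Love" rises when the person is close and slowly fades otherwise.
        self.love = max(0.0, self.love + 1.0 / (1.0 + distance) - 0.05)
        # Move toward the person, faster the more "love" there is.
        direction = 1.0 if person_position > self.position else -1.0
        self.position += direction * min(self.love, 1.0)


robot = ToyRobot()
for t in range(25):
    robot.step(person_position=10.0)
    print(f"t={t:2d}  position={robot.position:6.2f}  love={robot.love:4.2f}")
```

Whether the label “love” is doing any explanatory work in a system like this is exactly the point in dispute below.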
So does a magnet. So does a homing missile. But a north pole does not love a south pole, and a missile does not love its target. Neither do rivers long to meet the sea, nor does fire long to ascend to heaven, nor do rocks desire the centre of the earth.
Why should I attribute emotions to you?
Because you experience them yourself, and I seem to be the same sort of thing as you are. Without any knowledge of what emotions are, that’s the best one can do.
This does not work for robots at the current state of the art.
True, but we can make robots better than that. The one I mentioned was capable of changing to be like that in the presence of a person. I don’t know much about that particular robot, but we can make ones that will, in general, act in a manner that puts them in situations similar to the one they’re in at a given time, which is the best way I can define happiness, and we can make them happy when they’re near a specific person.
In any case, there is still a more basic problem. Why do you say that a magnet doesn’t love? I’m not saying that it does to any non-negligible extent, but it would be helpful to have a definition more precise than “do what humans do”.
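(A minimal sketch of that notion of happiness, assuming it can be cashed out as reward-seeking: a tiny tabular Q-learning agent on a line, rewarded only for being next to a fixed “person” cell, learns to move toward and stay near that cell. The world, reward, and hyperparameters are all invented for illustration.)

```python
import random

WORLD_SIZE = 10
PERSON_POS = 7
ACTIONS = (-1, +1)  # step left, step right

# Q-values for every (cell, action) pair, initialised to zero.
q = {(s, a): 0.0 for s in range(WORLD_SIZE) for a in ACTIONS}

def reward(state):
    # The robot is "happy" only when it is on or next to the person's cell.
    return 1.0 if abs(state - PERSON_POS) <= 1 else 0.0

alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration
for episode in range(200):
    state = random.randrange(WORLD_SIZE)
    for _ in range(30):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(WORLD_SIZE - 1, max(0, state + action))
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (
            reward(next_state) + gamma * best_next - q[(state, action)]
        )
        state = next_state

# The learned greedy policy points toward the person from (almost) every cell.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(WORLD_SIZE)])
```

On this reading, “making the robot happy when it’s near a specific person” is nothing more than a choice of reward function; whether that deserves the word “happiness” is part of what is being argued here.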
This does not work for robots at the current state of the art.
Can you give an example of when it could possibly work for robots? It sounds like you’re saying that it’s not love unless they’re conscious. While that is a necessary condition for it to work as a consciousness test, if that’s how you know it’s love then it’s circular. In order to prove it’s conscious it has to prove it can love. In order to prove it can love it must prove that it’s conscious.
Can you give an example of when it could possibly work for robots?
No, because I don’t know what emotions are. I don’t believe anyone else does either. Neither does anyone know what consciousness is. Nobody even knows what an answer to the question would look like.
I seem to ascribe emotions to a system—more generally, I ascribe cognitive states, motives, and an internal mental life to a system—when its behavior is too complicated for me to account for with models that don’t include such things.
I can describe the behavior of a magnet without resorting to such things, so I don’t posit them.
That’s not to say that I’m correct to ascribe them to systems with complicated behavior… I might be; I might not be. Merely to say that it’s what I seem to do. It’s what other humans seem to do as well… hence the common tendency to ascribe emotions and personalities to all sorts of complex phenomena.
If I were somehow made smart enough to fully describe your behavior without recourse to what Dennett calls the intentional stance, I suspect I would start to experience your emotional behavior as “fake” somehow.
I seem to ascribe emotions to a system—more generally, I ascribe cognitive states, motives, and an internal mental life to a system—when its behavior is too complicated for me to account for with models that don’t include such things.
This isn’t quite a fully baked idea yet, but personlike agents are so ubiquitous in human modeling of complex systems that I suspect they’re a default of some kind—and that this doesn’t necessarily indicate a lack of deep understanding of a system’s behavior. Programmers often talk about software they’re working on in agent-like terms—the component remembers this, knows about that, has such-and-such a purpose in life—but this doesn’t correlate with imperfect understanding of the software; it’s just a convenient way of thinking about the problem. Likewise for people—I’m not a psychologist or a neuroscientist, but I doubt people in those professions think of their fellows’ emotions as less real for understanding them better than I do.
(The main alternative for complex systems modeling seems to be thinking of systems as an extension of the self or another agent, which seems to crop up mostly for systems tightly controlled by those agents. Cars are a good example—I don’t say “where is my car parked?”, I say “where am I parked?”.)
This isn’t quite a fully baked idea yet, but personlike agents are so ubiquitous in human modeling of complex systems that I suspect they’re a default of some kind—and that this doesn’t necessarily indicate a lack of deep understanding of a system’s behavior. Programmers often talk about software they’re working on in agent-like terms—the component remembers this, knows about that, has such-and-such a purpose in life—but this doesn’t correlate with imperfect understanding of the software; it’s just a convenient way of thinking about the problem. Likewise for people—I’m not a psychologist or a neuroscientist, but I doubt people in those professions think of their fellows’ emotions as less real for understanding them better than I do.
I seem to ascribe emotions to a system—more generally, I ascribe cognitive states, motives, and an internal mental life to a system—when its behavior is too complicated for me to account for with models that don’t include such things.
You mean like a pseudorandom number generator?
Motives are easy to model. You just set what the system optimizes for. The part that’s hard to model is creativity.
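(A sketch of “you just set what the system optimizes for”: the agent’s apparent motive is whichever objective function it is handed, and swapping objectives swaps the motive. The actions and scores are made up for illustration.)

```python
def choose(actions, objective):
    """Pick the action that scores highest under the given objective."""
    return max(actions, key=objective)

# Candidate actions with made-up consequences.
actions = [
    {"name": "stay home", "comfort": 0.9, "novelty": 0.1},
    {"name": "go hiking", "comfort": 0.4, "novelty": 0.8},
    {"name": "skydive",   "comfort": 0.1, "novelty": 1.0},
]

def seek_comfort(a):   # one "motive"
    return a["comfort"]

def seek_novelty(a):   # a different "motive"
    return a["novelty"]

print(choose(actions, seek_comfort)["name"])  # -> stay home
print(choose(actions, seek_novelty)["name"])  # -> skydive
```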
If I were somehow made smart enough to fully describe your behavior without recourse to what Dennett calls the intentional stance, I suspect I would start to experience your emotional behavior as “fake” somehow.
That’s a bad sign. My emotional behavior wouldn’t become fake due to your intelligence.
Actually, scratch that: either way, I think the question of whether something is “genuinely” conscious should be dissolved.
Mostly I agree with your last sentence.
So how do you decide whether or not X deserves moral consideration: based on something like long-term interactions, or by looking at its code? I mean, if the real question is “how do I feel about X,” we might as well be explicit about that.
Dunno. But I’d rather admit my ignorance about the right question.
Are you asking if it can consider you a friend, or if you can consider it a friend?
If I can consider it a friend. I also think “is whatever this robot is experiencing genuinely love?” should be dissolved.