I don’t know how we could possibly know if the AI is aping consciousness for its own ends or if it actually is conscious.
How do you answer that question for human beings? ;-)
How do you know if a tree falling in the forest is just aping a sound for its own ends, or if it actually makes a sound? ;-)
Thinking about p-zombies is a sure sign that you are confused, and projecting a property of your brain onto the outside world. If you think there can be a p-zombie, then the mysterious thing called “consciousness” must reside in your map, not in the territory.
More specifically, if you can imagine robot or person A “with” consciousness, and robot or person B “without” it, and meanwhile imagine them to be otherwise “identical”, then the only place for the representation of the idea that there is a difference is in the mental tags you attach to these imaginary robots or people.
This is why certain ideas seem “supernatural” or “beyond the physical”—they are actually concepts that exist only in the mind of the observer, rather than in the physical reality.
This isn’t to say that there couldn’t someday be a reductionist definition of consciousness that refers to facts in the territory; it’s just that the intuitive idea of consciousness is actually a bit that gets flipped in our brains: an inbuilt bit that’s part of our brain’s machinery for telling the difference between animals, plants, and inanimate objects. This bit is the true home of our intuitive idea of consciousness, and is largely unrelated to whatever actual consciousness is.
In young children, after all, this bit flips on for anything that appears to move or speak by itself: animals, puppets, cartoons, etc. We can learn to switch it on or off for specific things, of course, but it’s an inbuilt function that has its own sense or feel of being “real”.
It’s also why we can imagine the same thing having or not having consciousness: to our brain it is an independent variable, because we have separate wiring to track it independently. But the mere existence of a tracking variable in our heads doesn’t mean the thing it tracks has a real existence. (We can see “colors” that don’t really exist in the visible spectrum, after all: many colors are simply our perception of what happens when something reflects light at more than one wavelength.)
I suggest that if you are still confused about this, consult the sequence on the meaning of words, and particularly the bits about “How an algorithm feels from the inside”—if I recall correctly, that’s the bit that shows how our neural nets can represent conceptual properties like “makes a sound” as if they were independent of the physical phenomena involved. The exact same concept applies to our projections of consciousness, and the confusion of p-zombies.
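The “separate wiring” point can be made concrete with a toy sketch (my own illustration, not anything from the sequence): if a perceiver stores its “this is an agent” judgment as a variable separate from the observed features, it can represent two entities that are identical in every feature yet differ in the tag. The difference lives entirely in the perceiver’s data structure, i.e. the map.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    """One entity as represented inside a perceiver's map."""
    features: dict    # everything observable: appearance, speech, behavior
    is_agent: bool    # a separately stored tag (the anthropomorphic bit)

# Two imagined entities with identical observable features...
person = Percept({"moves": True, "speaks": True}, is_agent=True)
zombie = Percept({"moves": True, "speaks": True}, is_agent=False)

# ...differ only in the tag the perceiver attached, never in any feature:
assert person.features == zombie.features
assert person.is_agent != zombie.is_agent
```

Nothing out in the territory corresponds to the `is_agent` field; it exists only inside the `Percept`, which is exactly the p-zombie intuition.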
How do you answer that question for human beings? ;-)
I don’t, but I assume that they do, because other humans and I share a common origin. I know I am conscious, and I would consider it strange if natural selection made me conscious but not individuals who are basically genetically identical to me.
I suggest that if you are still confused about this, consult the sequence on the meaning of words, and particularly the bits about “How an algorithm feels from the inside”
I will review that sequence again. I certainly am confused. However, I don’t think that consciousness is an attribute of my map but not the territory (though I freely admit that I’m having trouble expressing how that could be). I’m going to do my best here.
Can one rightly say that something exists but interacts with nothing, or is this contrary to what it means to exist? (Human consciousness is not one of these things, for indeed I have some idea of what consciousness is and I can say the words “I am conscious”, but I think the question is relevant.) If such things exist, are they in the map or the territory?
I intend to flesh this out further, but for the moment, I’ll repeat what I said to Toggle, below:
I saw a bit of the movie Her, about the love affair between a guy and his operating system. It was horrifying to me, but I think for a different reason than for everyone else in the room. I was thinking, “He might be falling in love with an automaton. How do we know if he is in a relationship with another mind, or just an unthinking mechanism of gears and levers that looks like another mind from the outside?” The idea of being emotionally invested in an emotional void bothers me. I want my relationships to be with other minds.
Some here see this as a meaningless distinction. The being acts the same as a mind, so for all intents and purposes it is a mind. What difference does it make to your utility if the relationship is with an empty chasm shaped like a person? The input-output is the same.
Perhaps you’re right. I’m still working through this, and maybe my discomfort is a facet of an outdated worldview. I’ll note, however, that this reduces charity to fuzzy-seeking: it doesn’t make any difference whether you actually help, as long as you feel like you help, and if the feeling is all that counts, saving one life is identical to saving every life. That way lies solipsism.
What difference does it make to your utility if the relationship is with an empty chasm shaped like a person?
How do you know that you’re not an empty chasm shaped like a person? Is it merely your belief that makes this so? Well then, any being we make that likewise believes this will also be conscious by this definition.
The essential confusion that you’re making is that you think you have a coherent definition of “consciousness”, but you don’t.
How do you know you’re not a p-zombie? You merely believe this is not the case.
But why?
Because your brain has—figuratively speaking—a neuron that means “this thing is an entity with goals and motivations”. Not as an abstract concept, but as an almost sensory quality. There is, in fact, a “qualia” (qualion?) for “that thing is an intentional creature”, triggered initially by observation of things that appear to move on their own. Call it the anthropomorphic qualion, or AQ for short.
The AQ is what makes it feel like p-zombies are a coherent concept. Because it exists, you can imagine otherwise-identical sensory inputs, but with the AQ switched on or off. When you make reference to an “empty chasm shaped like a person”, you are describing receiving sensory inputs that match “person”, but with your AQ not firing.
Compare this with e.g. Capgras syndrome, in which a person is utterly convinced that all their friends and relatives have been replaced by impostor-duplicates. These dastardly villains look and sound exactly the same as their real friends and relatives, but are clearly impostors.
This problem occurs because we have figurative neurons or qualia associated with recognizing specific people. So, if you have a friend Joe, there’s a Joe Qualion (JQ) that represents the “experience of seeing Joe”. In people with Capgras syndrome, there is damage to the parts of the brain that recognize faces, causing the person to see Joe, and yet not experience the Joe Qualion.
Thus, even though he or she sees a person that they will readily admit “looks just like Joe”, they will nonetheless insist, “That’s not Joe. I’ve known Joe my whole life, and that is not Joe.”
Now, even if you don’t have Capgras syndrome, you can still imagine what this would be like, to have Joe replaced by an exact duplicate. In your mind, you can picture Joe, but without the Joe Qualion.
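The Capgras pattern can be sketched as a toy model (my own illustration; the names and features are made up): recognition requires both a feature match and a separate familiarity signal, so cutting the signal yields “looks just like Joe, but isn’t Joe” even though the inputs are identical.

```python
def identify(features, jq_fires):
    """Toy Capgras model: 'That's Joe' requires BOTH a feature match
    and the independent familiarity signal (the Joe Qualion)."""
    looks_like_joe = (features == {"face": "joe", "voice": "joe"})
    if looks_like_joe and jq_fires:
        return "That's Joe."
    if looks_like_joe:
        # Same sensory evidence, but the separate signal is absent.
        return "Looks exactly like Joe, but that's not Joe."
    return "A stranger."

joe = {"face": "joe", "voice": "joe"}
print(identify(joe, jq_fires=True))   # intact recognition
print(identify(joe, jq_fires=False))  # identical inputs, signal cut
```

The two calls receive byte-identical `features`; only the independent signal differs, which is the same move as imagining a p-zombie.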
And this is the exact same thing you’re doing when you imagine p-zombies. (i.e., making stuff up!)
In both cases, however, what you’re doing is 100% part of your map. You can imagine “looks like a person, but is empty inside”, just like you can imagine “duplicate impostor of Joe”.
This does not mean, however, that it’s possible to duplicate Joe, any more than it implies you can have something that’s identical to a person but not conscious.
Now, you may say, “But it doesn’t rule it out, either!”
No, but then we can imagine the moon made of green cheese, or some other physical impossibility, or God making a stone so heavy he can’t lift it, and these imaginary excursions into incoherence are not any more evidence for their respective propositions than the p-zombie thought experiment is.
Because the only evidence we have for the proposal that p-zombies can exist is imaginary. What’s more, it rests solely on the AQ, that is, on our ability to anthropomorphize. And we do not have any credible evidence that our own perception of personal consciousness isn’t just self-applied anthropomorphism.
The thing that makes p-zombies so attractive, though, is that this imaginary evidence seems intuitively compelling. However, just as with the “tree falling in the forest” conundrum, it rests on a quirk of neuroanatomy: we can have qualia about whether something “is” a thing that are independent of all the sensory inputs we use to decide whether the thing “is” that thing in the first place.
Thus, we can imagine a tree without a sound, a Joe who isn’t Joe, and a p-zombie without consciousness. But our ability to imagine these things exists only because our anatomy contains separate representations of the ideas. They are labels in our mind that we can turn on and off, and then puzzle over philosophical arguments about whether the tree “really” makes a sound or not. But the labels themselves are part of the map, not the territory, because they are in the brain of the arguer.
Summary: we can imagine all kinds of stupid sh** (like trees falling in the forest without making a sound), because our brain’s maps represent “is”-ness as distinct qualia from the qualia that are used to determine the is-ness in the first place. A surprising number of philosophical quandaries and paradoxes arise from this phenomenon, and are no longer confusing as soon as you realize that “is-ness” is actually superfluous, existing as it does only in the map, not the territory. Rationalist taboo and E-prime are two techniques for resolving this confusion in a given context.