That’s a fine definition, but if everyone thought that, there would be no place for arguments about whether it’s possible for zombies (let alone p-zombies) to exist. It doesn’t seem to me that people see consciousness as meaning merely self-modeling.
I think the consensus here is that the idea of p-zombies is silly.
Certainly. But is the idea of ordinary zombies also silly? That’s what your definition implies.
ETA: not that I’m against that conclusion. It would make things so much simpler :-) It’s just that, in my experience, many people mean something else by “consciousness”, something that would allow for zombies.
What’s the difference?
If you define “consciousness” in a way that allows for unconscious but intelligent, even human-equivalent agents, then those are called zombies. Aliens or AIs might well turn out to be zombies. Peter Watts’s vampires from Blindsight are zombies.
ETA: a p-zombie is physically identical to a conscious human, but is still unconscious. (And we agree that makes no sense). A zombie is physically different from a conscious human, and as a result is unconscious—but is capable of all the behavior that humans are capable of.
(My original comment was wrong (thanks Blueberry!) and said: The difference between a zombie and a p-zombie is that p-zombies claim to be conscious, while zombies neither claim nor believe to be conscious.)
This is very different from my understanding of the definition of those terms, which is that p-zombies are physically identical to a conscious human, and a zombie is an unconscious human-equivalent with a physical, neurological difference.
I don’t see any reason why an unconscious human-equivalent couldn’t erroneously claim to be conscious, just as an unconscious computer can print out the sentence “I am conscious.”
You’re right. It’s what I meant, but I see that my explanation came out wrong. I’ll fix it.
I don’t see any reason why an unconscious human-equivalent couldn’t erroneously claim to be conscious

That’s true. But the fact of the matter would be that such a zombie would be objectively wrong in its claim to be conscious.
My question is: what is being conscious defined to mean? If it’s a property that is objectively present or not present and that you can be wrong about in this way, then it must be something more than a “pure subjective” experience or quale.
If a subjective experience is the same event, differently described, as a neural process, you can be wrong about whether you are having it. You can also be wrong about whether you and another being share the same or similar quale, especially if you infer such similarity solely from behavioral evidence.
Even aside from physical-side-of-the-same-coin considerations, a person can be mistaken about subjective experience. A tries the new soup at the restaurant and says “it tastes just like chicken”. B says, “No, it tastes like turkey.” A accepts the correction (and not just that it tastes like turkey to B). The plausibility of this scenario shows that we can be mistaken about qualia. Now, admittedly, that’s a long way from being mistaken about whether one has qualia at all—but to rule that possibility in or out, we have to make some verbal choices clarifying what “qualia” will mean.
Roughly speaking, I see at least two alternatives for understanding “qualia”. One would be to trot out a laundry list of human subjective feels: color sensations, pain, pleasure, tastes, etc., and then say “this kind of thing”. That leaves the possibility of zombies wide open, since intelligent behavior is no guarantee of a particular familiar mental mechanism causing that behavior. (Compare: I see a car driving down the road, doing all the things an internal combustion engine-powered vehicle can do. That’s no guarantee that internal combustion occurs within it.)
A second approach would be to define “qualia” by their role in the cognitive economy. Very roughly speaking, qualia are properties highly accessible to “executive function”, which properties go beyond (are individuated more finely than by) their roles in representing, for the cognizer, the objective world. On this understanding of “qualia” zombies might be impossible—I’m not sure.
But the fact of the matter would be that such a zombie would be objectively wrong in its claim to be conscious.

Well, the claim would be objectively incorrect; I’m not sure it’s meaningful to say that the zombie would be wrong.
My question is: what is being conscious defined to mean? If it’s a property that is objectively present or not present and that you can be wrong about in this way, then it must be something more than a “pure subjective” experience or quale.

As others have commented, it’s having the capacity to model oneself and one’s perceptions of the world. If p-zombies are impossible, which they are, there are no “pure subjective” experiences: any entity’s subjective experience corresponds to some objective feature of its brain or programming.
it’s having the capacity to model oneself and one’s perceptions of the world.

That’s not the definition that seems to be used in many of the discussions about consciousness. For instance, the term “Hard Problem of Consciousness” isn’t talking about self-modeling.
Let’s take the discussion about p-zombies as an example. P-zombies are physically identical to normal humans, so they (that is, their brains) clearly model themselves and their own perceptions of the world. The claim that they are unconscious then directly contradicts that definition of consciousness.

If proving that p-zombies are logically impossible were as simple as pointing this out, the whole debate wouldn’t exist.
Beyond that example, I’ve gone through all LW posts that have “conscious” in their title:
The Conscious Sorites Paradox, part of Eliezer’s series on quantum physics. He says:

I’m saying that “But you haven’t explained consciousness!” doesn’t reasonably seem like the responsibility of physicists, or an objection to a theory of fundamental physics.

And then he says:

however consciousness turns out to work, getting infected with virus X97 eventually causes your experience of dripping green slime.
I read that as using ‘consciousness’ to mean experience in the sense of subjective qualia.
Framing Consciousness. cousin_it has retracted the post, but apparently not for reasons relevant to us here. It talks about “conscious/subjective experiences”, and asks whether consciousness can be implemented on a Turing machine. Again, it’s clear that a system that recursively models itself can be implemented on a TM (see the sketch after this list), so that can’t be what’s being discussed.
MWI, weird quantum experiments and future-directed continuity of conscious experience. Clearly uses “consciousness” to mean “subjective experience”.
Consciousness. Ditto.
Outline of a lower bound for consciousness. I don’t understand this post at first sight—would have to read it more thoroughly...
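To make that TM-implementability point concrete, here is a minimal sketch of a recursively self-modeling system in Python. It is only an illustration of the “self-modeling” reading under discussion, not anyone’s actual theory of consciousness; the names (SelfModelingAgent, world_model, self_model) are hypothetical. The point is simply that this kind of bookkeeping is ordinary, computable code.

```python
# Hypothetical sketch of a "self-modeling" system: an agent that keeps
# a model of the world and a model of itself, including a belief about
# its own modeling activity. Nothing here goes beyond ordinary,
# Turing-computable bookkeeping.

class SelfModelingAgent:
    def __init__(self):
        self.world_model = {}  # beliefs about the environment
        self.self_model = {}   # beliefs about this agent's own state

    def perceive(self, observation):
        """Update the world model, then record that update in the self model."""
        self.world_model.update(observation)
        self.self_model["last_perception"] = observation
        self.self_model["world_model_size"] = len(self.world_model)

    def introspect(self):
        """Model the modeling itself: a belief about the agent's own beliefs."""
        self.self_model["is_modeling_itself"] = True
        return dict(self.self_model)

agent = SelfModelingAgent()
agent.perceive({"wasp": "stinging left arm"})
print(agent.introspect())
# -> {'last_perception': {'wasp': 'stinging left arm'},
#     'world_model_size': 1, 'is_modeling_itself': True}
```

Whether a system like this thereby has subjective experience is exactly what the rest of this thread is disputing.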
If p-zombies are impossible, which they are, there are no “pure subjective” experiences: any entity’s subjective experience corresponds to some objective feature of its brain or programming.

The reason “subjective experience” is called subjective is that it’s presumed not to be part of the objective, material world. That definition is dated now, of course.
I don’t want to turn this thread into a discussion of what consciousness is, or what subjective experience is. That’s a discussion I’d be very interested in, but it should be separate. My original question was: what do people mean by “consciousness”? If I understood you correctly that to you it simply means self-modeling systems, then I was right to think different people use the C-word to mean quite different things, even just here on LW.
Let’s say you’re having a subjective experience. Say, being stung by a wasp. How do you know? Right. You have to be aware of yourself, and your skin, and have pain receptors, and blah blah blah.
But if you couldn’t feel the pain, let’s say because you were numb, you would still feel conscious. And if you were infected with a virus that made a wasp sting feel sugary and purple, rather than itchy and painful, you would also still be conscious.
It’s only when you don’t have a model of yourself that consciousness becomes impossible.
It’s only when you don’t have a model of yourself that consciousness becomes impossible.

That doesn’t mean they’re the same thing. Unless you define them to mean the same thing. But as I described above, not everyone does that. There is no “Hard Problem of Modeling Yourself”.
ETA: a p-zombie is physically identical to a conscious human, but is still unconscious. (And we agree that makes no sense). A zombie is physically different from a conscious human, and as a result is unconscious—but is capable of all the behavior that humans are capable of.

Where the heck is this terminology coming from? As I learned it, the ‘philosophical’ in “philosophical zombie” is just there to distinguish it from Romero-imagined brain-eating undead.
Yes, but we need some other term for “unconscious human-like entity”. I read one paper that used the terms “p-zombie” and “b-zombie”, where the p stood for “physical” as well as “philosophical” and the b stood for “behavioral”.
I’d rather call the first an n-zombie (meaning neurologically identical to a human). And, yeah, let’s use b-zombie instead of zombie, as all of these are varieties of philosophical zombie.
(But yes they’re just words. Thanks for clarifying.)
P-zombies can write philosophical papers on p-zombies.
Oh, p-zombies are just the reductio ad absurdum version? Yeah, I don’t believe in zombies.
P-zombies aren’t just a reductio ad absurdum, although most of LW does consider them to be one. David Chalmers, who is a very respected philosopher, takes the idea quite seriously, as do a surprisingly large number of other philosophers.
Please explain to me how it is not.
You can’t just say, “This smart guy takes this very seriously.” Aristotle took a lot of things very seriously that turned out to be nonsense.
‘Zombie Review’ provides some background here...
My point is that it isn’t generally regarded as a reductio. Indeed, it was originally constructed as an argument against physicalism. I see it as a reductio too, or, even more to the point, as an indication of how far into a corner science has pushed dualism. The really scary thing is that some philosophers seem to think that p-zombies are a slam-dunk argument for dualism.
Who?
Nagel and Chalmers both seem to think it is a strong argument. Kirk used to think so, but has since gone about pi radians (i.e., reversed course) on that. My impression, from seeing Block mentioned in passing, is that he also sees it as a strong argument, though I haven’t actually read anything by Block.
Thinking it’s a strong argument is, of course, still a long way from thinking it’s a “slam dunk” (nobody that I’m aware of thinks that).
Yeah, that wording may be too strong, although the impression I get certainly is that Kirk was convinced it was a slam dunk for quite some time. Kirk’s book “Zombies and Consciousness” (which I’ve only read parts of) seems to describe him as having once considered it pretty close to a slam dunk. But yeah, my wording was probably too strong.
Okay, I agree.
It’s just really easy to take the explicit, “this guy takes it seriously” and make the implicit connection, “and this is totally not a silly idea at all.”