Why is it that philosophical zombies are unlikely to exist? Eliezer’s article Zombies! Zombies? seemed mostly to be an argument against epiphenomenalism. In other words, if a philosophical zombie existed, there would likely be evidence that it was a philosophical zombie, such as it not talking about qualia. However, there are individuals who outright deny the existence of qualia, such as Daniel Dennett. Is it not possible that individuals like Dennett are themselves philosophical zombies?
Also, what are LessWrong’s views on the idea of a continuous consciousness? CGPGrey brought up this issue in The Trouble with Transporters. Does a continuous self exist at all, or is our perception of being a continuous conscious entity existing throughout time just an illusion?
Nope, your “in other words” summary is incorrect. A philosophical zombie is not any entity without consciousness; it is an entity without consciousness that falsely perceives itself as having consciousness. An entity that perceives itself as not having consciousness (or not having qualia or whatever) is a different thing entirely.
This is mostly just arguing over semantics. Just replace “philosophical zombie” with whatever your preferred term is for a physical human who lacks any qualia.
If an argument is about semantics, this is not a good response. That is:
An important part of normal human conversations is error correction. Suppose I say “three, as an even number, …”; the typical thing to do is to silently think “probably he meant odd instead of even; I will simply edit my memory of the sentence accordingly and continue to listen.” But in technical contexts, this is often a mistake; if I write a proof that hinges on the evenness of three, that proof is wrong, and it’s worth flagging the discrepancy and raising it.
Technical contexts also benefit from specificity of language. If I have a term used to refer to the belief that “three is even,” using that term to also refer to the belief that “three is odd” will be the source of no end of confusion. (“Threevenism is false!” “What do you mean? Of course Threevenism is true.”) So if there is a technical concept that specifically refers to X, using it to refer to Y will lead to the same sort of confusion; use a different word!
That is, on the object level: it is not at all sensible to think that philosophical zombies are useful as a concept; the idea is deeply confused. Separately, it seems highly possible that people vary in their internal experience, such that some people experience ‘qualia’ and other people don’t. If the main reason we think people have qualia is that they say that they do, and Dennett says that he doesn’t, then the standard argument doesn’t go through for him. Whether that difference will end up being deep and meaningful or merely cosmetic seems unclear, and is more likely to be discerned through psychological study of many humans, in much the same way that the question of mental imagery was best attacked by a survey.
This variability suggests that qualia are a questionable thing to use as a foundation for other theories. For example, it seems to me that it would be unfortunate if someone decided it was fine to torture some humans and not others because “only the qualia of being tortured is bad”; torturing humans seems likely to be bad for other reasons as well.
Suppose you made a human-level AI. Suppose there was some doubt about whether it was genuinely conscious. Wouldn’t that amount to the question of whether or not it was a zombie?
Or it’s terminological confusion.
No. There are a few places this doubt could be localized, but it won’t be in ‘whether or not zombies are possible.’ By definition we can’t get physical evidence about whether or not it’s a zombie (a zombie is in all physical respects identical to a non-zombie, except non-zombies beam their experience to a universe causally downstream of us, where it becomes “what it is like to be a non-zombie,” and zombies don’t), in exactly the same way we can’t get physical evidence about whether or not we’re zombies. In trying to differentiate between different physical outcomes, only physicalist theories are useful.
The doubt will likely be localized in ‘what it means to be conscious’ or ‘how to measure whether or not something is conscious’ or ‘how to manufacture consciousness’, where one hopes that answers to one question inform the others.
Perhaps instead the doubt is localized in ‘what decisions are motivated by facts about consciousness.’ If there is ‘something it’s like to be Alexa,’ what does that mean about the behavior of Amazon or its customers? In a similar way, it seems highly likely that the inner lives of non-human animals parallel ours in specific ways (and don’t in others), and even if we agree exactly on what their inner lives are like we might disagree on what that implies about how humans should treat them.
It’s kind of against the moderation guidelines of “Make personal statements instead of statements that try to represent a group consensus” for anyone to try to answer a question about LessWrong’s views, hahah =P
But, speaking authentically just for myself as a product of the local meditations: there is no reason to think that continuity of anthropic measure exists on a metaphysical level. We can conclude from Clones-in-Rooms-style thought experiments that different clumps of matter have different probabilities of observing their own existence (different quantities of anthropic measure, or observer-moments), but we have no reason to think that their observer-moments are linked together in any special way. Our memories are not evidence of that. If your subjectivity-mass had been in someone else a second ago, you wouldn’t know.
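To make the flavor of that style of argument concrete (a minimal illustration with made-up numbers, not anything from the thread): suppose one room contains a single copy of you and another room contains nine physically identical copies, and you don’t know which room you’re in. Under a simple self-sampling assumption over observer-moments,

\[
P(\text{this observer-moment is in the nine-copy room}) = \frac{9}{9+1} = 0.9.
\]

The calculation assigns measure to each observer-moment, but nothing in it links this observer-moment to any particular earlier one; that linkage would have to come from somewhere else.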
An agent is allowed to care about the observer-states that have some special physical relationship to their previous observer-states, but nothing in decision theory or epistemology will tell you what those physical relationships have to be. Maybe the agent does not identify with itself after teleportation, or after sleeping, or after blinking. That comes down to the utility function, not the metaphysics.
P-zombies are indeed all about epiphenomenalism. Go check out David Chalmers’ exposition for the standard usage. I think the problem with epiphenomenalism is that it treats ignorance as a positive license to introduce an epiphenomenal essence.
We know that the brain in your body does all sorts of computational work, and does things that function like memory, and planning, and perception, and being affected by emotions. We might even use a little poetic language and say that there is “someone home” in your body—that it’s convenient and natural to treat this body as a person with mental attributes. But it is the unsolved Hard Problem of Consciousness, as some would say, to prove that the person home in your body is you. We could have an extra consciousness-essence attached to these bodies, they say. You can’t prove we don’t!
When it comes to denying qualia, I think Dennett would bring up the anecdote about magic from Lee Siegel:
“I’m writing a book on magic,” I explain, and I’m asked, “Real magic?” By real magic people mean miracles, thaumaturgical acts, and supernatural powers. “No,” I answer: “Conjuring tricks, not real magic.” Real magic, in other words, refers to the magic that is not real, while the magic that is real, that can actually be done, is not real magic.
Dennett thinks people’s expectations are that “real qualia” are the things that live in the space of epiphenomenal essences and can’t possibly be the equivalent of a conjuring trick.
No, p-zombies are primarily about explanation, not epiphenomenalism.
The Hard Problem of Consciousness has virtually nothing to do with personal identity.
If qualia are a conjuring trick, no one has explained how the trick is pulled off.
Zombie Dennett: which is more likely? That philosophers could interpret the same type of experience in fundamentally different ways, or that Dennett has some neurological defect which has removed his qualia but not his ability to sense and process sensory information?
Consciousness continuity: I know I’m a computationalist and [causalist?], and I am weakly confident that most LWers share at least one of these beliefs. (Speaking for others is discouraged here, so I doubt you’ll be able to get more than a poll of beliefs, or possibly a link to a previous poll.)
Definitions of terms: computationalism is the view that cognition, identity, etc. are all computations or properties of computations. Causalist is a word I made up to describe the view that continuity is just a special form of causation, and that all computation-preserving forms of causation preserve identity as well. (That is, I don’t see it as fundamentally different if the causation from one subjective moment to the next is due to the usual evolution of brains over time or due to somebody scanning me and sending the information to a nanofactory, so long as the information that makes me up isn’t lost in this process.)