You have misunderstood the argument completely. You say “I know I’m speaking from limited experience, here. But based on my limited experience, the Zombie Argument may be a candidate for the most deranged idea in all of philosophy.” Melodrama, this, but I would advise focusing on the first part of the phrase (“But based on my limited experience....”) if you want to make progress.
The main point of the zombie argument is that if science is so completely helpless that it can say nothing—even in principle—about the subjective phenomenology of consciousness (and by widespread consensus, this appears to be the case), then the possibility of a parallel universe in which that particular aspect is missing (i.e. the Zombie universe) cannot be ruled out. This Can’t-Rule-It-Out aspect is what Chalmers is deploying.
He is NOT saying that we should believe in a parallel zombie universe (a common misunderstanding among amateur philosophers), he is saying that IF science decides to do a certain kind of washing-its-hands on the whole phenomenology of consciousness idea THEN it follows that philosophers can declare that it is logically possible for there to be a parallel universe in which the thing is missing. It is that logical entailment that is being exploited as a way to come to a particular conclusion about the nature of consciousness.
Specifically, Chalmers then goes on to say that the very nature of subjective phenomenology is that we have privileged access to it, and we are able to assert its existence in some way. It is the conflict between privileged access and logical possibility of absence that drives the various zombie arguments.
But notice what I said about science washing its hands. If science declares that there really is absolutely nothing it can say about pure subjective phenomenology, science cannot then try to have its cake and eat it too. Science (or rather you, with remarks like “I think I speak for all reductionists when I say Huh?”) cannot turn right back around and say “That’s preposterous!” when faced with the idea that a zombie universe is conceivable. Science cannot say:
a) "We can say NOTHING about the nature of subjective conscious experience, and
b) "Oh, sorry, I forgot: there is one thing we can say about it after all: it is Preposterous
that a world could exist in which subjective conscious experience did not exist, but
where everything else was the same!"
Your misunderstanding comes from not appreciating that this is the conundrum on which the whole argument is based.
Instead, you just fell into the trap and tried to use “Huh!?” as a scientific response.
Finally, in case the point needs to be explained: why does the “Huh!” response not work? Try to apply it to this parallel case. Suppose you are trying to tell whether there is a possibility of a liar faking their emotions. You know: kid suspected of stealing cookies, and kid cries and emotes and pleads with Mother to believe that she didn’t do it. Is it logically possible for the kid to give a genuine-looking display of innocence, while at the same time being completely guilty inside? If all liars had an equal facility with this kind of fake emotion, would philosophers be justified in saying that it is nevertheless LOGICALLY POSSIBLE for there to be all the outward signs of innocence, but with none of the internal innocence?
According to your approach, you could just simply laugh and say “Huh?”, and then declare that “the Fake-Innocence Argument may be a candidate for the most deranged idea in all of philosophy.”
Eliezer’s article is actually quite long, and not the only article he’s written on the subject on this site—it seems uncharitable to decide that “Huh?” is somehow the most crucial part of it. Also, whether or not there is widespread consensus that science can in principle say nothing about subjective phenomenology, there is certainly no such consensus amongst reductionists—it simply wouldn’t be very reductionist, would it?
The “Huh?” part was then elaborated, but the elaboration itself added nothing to the basic “Huh?” argument: he simply appealed to the idea that this is self-evidently preposterous. He did also pursue other arguments (as you say: there were many more words), but the rest involved extrapolations and extensions, all of which were either strawmen or irrelevant.
If you disagree, you should really find the supporting arguments of his that you believe I overlooked. I see none.
You have misunderstood the argument completely. You say “I know I’m speaking from limited experience, here. But based on my limited experience, the Zombie Argument may be a candidate for the most deranged idea in all of philosophy.” Melodrama, this, but I would advise focusing on the first part of the phrase (“But based on my limited experience....”) if you want to make progress.
The ‘limited experience’ caveat serves to allow that Eliezer may be unfamiliar with something in philosophy that is even more deranged than the Zombie argument—a necessary concession if he is to make the claim ‘most deranged’. It isn’t intended to concede any ignorance of the zombie argument itself, which he quite clearly understands.
Your claim (“…the zombie argument itself, which he quite clearly understands…”) is entirely unsupported. I know many philosophers, on both sides of the debate about zombies, and consciousness in general, who would say that Eliezer’s claims are in a standard class of amateur misconstruals of the zombie argument.
Old, old counterarguments, in other words, that were dealt with a long time ago.
Your arbitrary declaration that he “quite clearly understands” the zombie argument does nothing to show that he does.
Your arbitrary declaration that he “quite clearly understands” the zombie argument does nothing to show that he does.
This is true. My arbitrary declaration of comprehension is very nearly as meaningless as your claim to the contrary. The two combined do serve to at least establish controversy. That means readers are reminded to think critically about what they read and arrive at their own judgement through whatever evidence gathering mechanisms they have in place.
I know many philosophers, on both sides of the debate about zombies, and consciousness in general, who would say that Eliezer’s claims are in a standard class of amateur misconstruals of the zombie argument.
I know many philosophers who would indeed dismiss Eliezer’s position as naive. And to be fair the position is utterly naive. The question is whether the sophisticated alternative is a load of rent-seeking crock founded on bullshit. (And, on the other hand, I also know some philosophers whose thinking I do respect!)
But isn’t it the point that Science specifically IS actually going around saying things about subjective consciousness? Namely that apparently it is a causal result of the way your cerebral neurons interact; to paraphrase Yudkowsky, “Consciousness is made of atoms.” You cannot take away consciousness and still have the same thing. Consciousness-testing is a one-place function.
Quine’s view of philosophy, which appears to be generally accepted here on LW, says that ultimately all philosophy is psychology, so is it not a better and more productive idea to ask “Why do we talk so passionately of this strange property called consciousness?”
This is not correct. Science is not making any claims about subjective consciousness. It makes claims about other meanings of the term “consciousness”, but about subjective phenomenology it is silent or incoherent. For example, the claim “Consciousness is made of atoms” is just silliness. What type of atoms? Boron? Carbon? Hydrogen? And in virtue of what feature of atoms is red the way it is?
If you read this mini-sequence and say you can imagine a Zombie Mary in this kind of detail, then I declare your intuition broken. By which I mean, we’d have to drop the topic or ask if one type of intuition has more reason to work (given what science tells us).
Your consciousness is made of atoms. Not a single kind of atom, but many different kinds. I cannot recite the entirety of human biochemistry by heart, but I am sure it is readily available somewhere in peer-reviewed publications. The fact of the matter is that your consciousness is a program running on the specialized wetware that is your brain. It might be possible to run your consciousness in a microanatomical computer simulation, but such a sim is still run on a computer made of atoms.
Now, information-theoretically, it must be possible to say something about this consciousness property that some programs exhibit and others don’t. Or maybe there isn’t a hard and fast point where consciousness is defined, and it is in fact a continuous spectrum. I don’t know, but if I am to bet, I say the latter.
There must also then be some way of making definite statements about how that conscious program will act if it is copied from one medium (human) to another (microanatomical sim).
The information-theoretic facts do not change the fact that the computer or the brain that runs the conscious program is still a real physical thing. So science can say something about the computational substrate, which is made of atoms; about the consciousness property, which is information theory; and about the nature of copying a mind, which is also information theory.
Now, are you telling me that information theory, chemistry and electrical engineering are not sciences?
Upvoted for pointing out that the post fails to address a basic issue.
However, I don’t think anything said in the post is really wrong. Your characterization of the zombie argument appears to be this:
A1: Science can say nothing about the nature of subjective experience.
A2: If science can say nothing about the nature of subjective experience, then science must leave open the possibility of zombies.
Conclusion: Science leaves open the possibility of zombies.
The “long version” of the zombie argument has much to say in order to establish A1 and A2. However, the essence of A1 was (in my understanding) established as a philosophical idea long before the zombie argument. If I understand your complaint, it is that Eliezer is not really addressing A2 at all, which is the meat of the zombie argument; rather, in rejecting the conclusion, he is rejecting A1. So, for a more complete argument, he could have directly addressed the idea of the “hard problem of consciousness” and its relationship to empirical science. (Perhaps he does this in other posts; I haven’t read ’em all...)
EDIT:
I now have a different understanding (thanks to talking to Richard elsewhere). The point of the zombie argument, in this understanding, is to distinguish “the hard problem of consciousness” from other problems (especially, the neurological problem). Eliezer argues by identifying belief in Zombies with epiphenomenalism; but this seems to require the wrong form of “possible”.
If the zombie argument is meant to establish that given an explanation for the neurological problem, we would still need an explanation for the hard problem, then the notion of “possible” that is relevant is “possible given a theory explaining neurological consciousness”. The zombie argument relies on our intuitions to conclude that, given such a theory, we could still not rule out philosophical zombies.
This does not imply epiphenomenalism because it does not imply that zombies are causally possible. It only argues the need for more statements to rule them out.
That said—if Eliezer is simply denying the intuition that the zombie argument relies on (the intuition that there is something about consciousness that would be left unexplained after we had a physical theory of consciousness, so that such a theory leaves open the possibility of zombies), then that’s “fair game”.
So, for a more complete argument, he could have directly addressed the idea of the “hard problem of consciousness” and its relationship to empirical science.
He could have, but, logically speaking, he doesn’t need to. If he rejects premise A1, he can reject the conclusion as well, even if A2 is logically valid, since rejecting A1 renders the argument unsound.
So I suppose the question, for someone who wishes to rescue their opposition to zombies as logically possible entities, is what else they open the door to if they concede “You’re right, science does have something to say about conscious experience after all. One thing science has to say about conscious experience is that a given physical state of the world either gives rise to conscious experience, or it doesn’t; the same state of the world cannot do both.”
That seems a relatively safe move to me.
All of that said, your analogy to Fake-Innocence is a bit of a bait-and-switch. The idea that two different systems (including the same individual at different times) can demonstrate identical behavior that is in one case the result of a specified mental state (innocence, consciousness, pain, what-have-you) and in the other case is not, is very different from the idea that two identical systems (“identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion”) can have the mental state in one case and not in the other.
It’s not clear to me that incredulity is inappropriate with respect to the second claim, except in the sense that it’s impolite.
About Science making the claim “You’re right, science does have something to say about conscious experience after all … [namely] … that a given physical state of the world either gives rise to conscious experience, or it doesn’t; the same state of the world cannot do both.”
This would just be Solution By Fiat. Hardly a very dignified thing for Science to do.
And don’t forget: Chalmers’ goal is to say “IF there is a logical possibility that in another imaginable kind of universe a thing X does not exist (where it exists in this one), THEN this thing X is a valid subject of questions about its nature.”
That is a truly fundamental aspect of epistemology—one of the bedrock assumptions accepted by philosophers—so all Chalmers is doing is employing it. Chalmers did not invent that line of argument.
About the analogy. It only looks like a bait and switch because I did not spell out the implications properly. I should have asked what would happen if there were no possible way for internal inspection of mental state to be done. If, for some reason, we could not do any physics to say what went on inside the mind when it was either telling the truth or lying, would it be valid to deploy that appeal to preposterousness? You must keep my assumption in order to understand the analogy, because I am asking about a situation in which we cannot ever distinguish the physical state of a lying human brain and a truthtelling human brain, but where we nevertheless had privileged access to our own mental states, and knew for sure that sometimes we lied when we made a genuine protest of innocence. (Imagine, if you will, a universe in which the crucial mental process that determined intention to tell the truth versus intention to deceive was actually located inside some kind of quantum field subject to an uncertainty principle, in such a way that external knowledge of the state was forbidden).
My point is that if we lived in such a universe, and if Eliezer poured scorn on the idea of Appearance-Of-Innocence without Intention-To-Be-Genuine, his appeal would be transparently empty.
I have no idea what dignity has to do with anything here.
As for the analogy… sure, if we discard the assertion that the two systems are physically identical, then there’s no problem. Agreed. The idea that two systems can demonstrate the same behavior at some level of analysis (e.g., they both utter “Hey! I’m conscious!”), where one of them is conscious and one isn’t, isn’t problematic at all.
It’s also not the claim the essay you’re objecting to was objecting to.
This would just be Solution By Fiat. Hardly a very dignified thing for Science to do.
It isn’t solution by fiat; the idea isn’t to add just that statement to science. Rather, the idea is that such a statement already seems probable from basic scientific considerations such as those discussed in the post.
EDIT:
I see now that this is not relevant. The point of the zombie argument is not to refute such considerations, but rather, to illustrate the difference between “the hard problem of consciousness” and other sorts of consciousness.
I am asking about a situation in which we cannot ever distinguish the physical state of a lying human brain and a truthtelling human brain, but where we nevertheless had privileged access to our own mental states, and knew for sure that sometimes we lied when we made a genuine protest of innocence.
So, if we have knowledge that cannot possibly be observed in the physical world, then that proves that there is something else going on? Are you saying, for example, that we somehow know both the position and momentum of a particle with a precision greater than that allowed by the Heisenberg Uncertainty Principle, and that this gives rise to us either knowing that we are lying or knowing that we are telling the truth?
Well sure, if you start out with the given premise that breaks the laws of physics as we know them, of course you are going to conclude that there is something beyond “mere atoms”. Suppose we know that the sky is actually green, even though all of physics says it should be blue. Clearly our map (aka the laws of physics as we currently know them) doesn’t match the territory (the stuff that’s causing our observations). But it doesn’t seem to be necessary to resort to such wild hypotheses, because it is still quite plausible that consciousness emerges from “mere atoms”. We just don’t know the details of how yet, but we’re working on it. If someday we have a full understanding of the brain, and there doesn’t seem to be anything there to give rise to consciousness, then such wild speculation will be warranted. Today though, the substance dualism argument has no evidence behind it, and therefore an infinitesimally small probability of being true.
Hello. You state that “it is still quite plausible that consciousness emerges from ‘mere atoms’”, but you do not explain why you make that statement. In fact you say that one day it will all be totally clear, even if it isn’t yet right now.
I might be wrong, but that’s why I’m asking: Is it not possible to say that about anything?
If all liars had an equal facility with this kind of fake emotion, would philosophers be justified in saying that it is nevertheless LOGICALLY POSSIBLE for there to be all the outward signs of innocence, but with none of the internal innocence?
Logically possible, yes. But in practice, you could not use outward signs of emotion to determine whether anyone was lying. If, somehow, there were no other ways to determine whether people other than yourself were lying (preposterous, yes, but bear with my thought experiment for a moment), then the best you could do is to say, “well, I know that I sometimes lie, but everyone else has no capacity for lies at all, as far as I can ever know”. In other words, you’d have arrived at a sort of deception-solipsism. Would you agree?
I would think that the better analogy would be “Well, I know that I sometimes tell the truth, but so far as I can ever know, the utterances of other people bear no special relationship to the truth”. I find it to be a better analogy because, in this view, we could try to introduce “philosophical liars”: people who appear to be truthful in every way, but are merely putting up facades, with no inherent truth-connection behind their words.
You have misunderstood the argument completely. You say “I know I’m speaking from limited experience, here. But based on my limited experience, the Zombie Argument may be a candidate for the most deranged idea in all of philosophy.” Melodrama, this, but I would advise focusing on the first part of the phrase (“But based on my limited experience....”) if you want to make progress.
The main point of the zombie argument is that if science is so completely helpless that it can say nothing—even in principle—about the subjective phenomenology of consciousness (and by widespread consensus, this appears to be the case), then the possibility of a parallel universe in which that particular aspect is missing (i.e. the Zombie universe) cannot be ruled out. This Can’t-Rule-It-Out aspect is what Chalmers is deploying.
He is NOT saying that we should believe in a parallel zombie universe (a common misunderstandinga among amateur philosophers), he is saying that IF science decides to do a certain kind of washing-its-hands on the whole phenomenology of consciousness idea THEN it follows that philosophers can declare that it is logically possible for there to be a parallel universe in which the thing is missing. It is that logical entailment that is being exploited as a way to come to a particular conclusion about the nature of consciousness.
Specifically, Chalmers then goes on to say that the very nature of subjective phenomenology is that we have privileged access to it, and we are able to assert its existence in some way. It is the conflict between privileged access and logical possibility of absence, that drives the various zombie arguments.
But notice what I said about science washing its hands. If science declares that there really is absolutely nothing it can say about pure subjective phenomenology, science cannot then try to have its cake and eat it too. Science (or rather you, with remarks like “I think I speak for all reductionists when I say Huh?”) cannot turn right back around and say “That’s preposterous!” when faced with the idea that a zombie universe is conceivable. Science cannot say:
Your misunderstanding comes from not appreciating that this is the conundrum on which the whole argument is based.
Instead, you just fell into the trap and tried to use “Huh!?” as a scientific response.
Finally, in case the point needs to be explained: why does the “Huh!” response not work? Try to apply it to this parallel case. Suppose you are trying to tell whether there is a possibility of a liar faking their emotions. You know: kid suspected of stealing cookies, and kid cries and emotes and pleads with Mother to believe that she didn’t do it. Is it logically possible for the kid to give a genuine-looking display of innocence, while at the same time being completely guilty inside? If all liars had an equal facility with this kind of fake emotion, would philosophers be justified in saying that it is nevertheless LOGICALLY POSSIBLE for there to be all the outward signs of innocence, but with none of the internal innocence?
According to your approach, you could just simply laugh and say “Huh?”, and then declare that “the Fake-Innocence Argument may be a candidate for the most deranged idea in all of philosophy.”
Eliezer’s article is actually quite long, and not the only article he’s written on the subject on this site—it seems uncharitable to decide that “Huh?” is somehow the most crucial part of it. Also, whether or not there is widespread consensus that science can in principle say nothing about subjective phenomenology, there is certainly no such consensus amongst reductionists—it simply wouldn’t be very reductionist, would it?
The “Huh?” part was then elaborated, but the elaboration itself added nothing to the basic “Huh?” argument: he simply appealed to the idea that this is self-evidently preposterous. He did also pursue other arguments (as you say: there were many more words), but the rest involved extrapolations and extensions, all of which were either strawmen or irrelevant.
If you disagree, you should really find the supporting arguments of his that you believe I overlooked. I see none.
The ‘limited experience’ caveat serves to allow that Eliezer may be unfamiliar with something in philosophy that is even more deranged than the Zombie argument—a necessary concession if he is to make the claim ‘most deranged’. It isn’t intended to concede any ignorance of the zombie argument itself, which he quite clearly understands.
Your claim (”… the zombie argument itself, which he quite clearly understands....”) is entirely unsupported. I know many philosophers, on both sides of the debate about zombies, and consciousness in general, who would say that Eliezer’s claims are in a standard class of amateur misconstruals of the zombie argument.
Old, old counterarguments, in other words, that were dealt with a long time ago.
Your arbitrary declaration that he “quite clearly understands” the zombie argument do nothing to show that he does.
This is true. My arbitrary declaration of comprehension is very nearly as meaningless as your claim to the contrary. The two combined do serve to at least establish controversy. That means readers are reminded to think critically about what they read and arrive at their own judgement through whatever evidence gathering mechanisms they have in place.
I know many philosophers who would indeed dismiss Eliezer’s position as naive. And to be fair the position is utterly naive. The question is whether the sophisticated alternative is a load of rent seeking crock founded on bullshit. (And, on the other hand, I also know some philsophers whose thinking I do respect!)
But isn’t it the point that Science specifically IS actually going around saying things about subjective consciousness? Namely that apparently it is a causal result of the way your cerebral neurons interact, to paraphrase Yudkowsky “Consciousness is made of atoms.” You cannot take away consciousness and still have the same thing. Consciousness-testing is a one place function.
Quine’s view of philosophy, which appears to be generally accepted here on LW, says that ultimately all philosophy is psychology, so is it not a better and more productive idea to ask “Why do we talk so passionately of this strange property called consciousness?”
This is not correct. Science is not making any claims about subjective consciousness. It makes claims about other meanings of the term “consciousness”, but about subjective phenomenology it is silent or incoherent. For example, the claim “Consciousness is made of atoms” is just silliness. What type of atoms? Boron? Carbon? Hydrogen? And in virtue of what feature of atoms, is red the way it is?
If you read this mini-sequence and say you can imagine a Zombie Mary in this kind of detail, then I declare your intuition broken. By which I mean, we’d have to drop the topic or ask if one type of intuition has more reason to work (given what science tells us).
Your consciousness is made of atoms. Not a single kind of atom, but many different kinds. I cannot recite the entirety of the human biochemistry from heart, but I am sure it is readily available somewhere in peer reviewed publications. The fact of the matter is that your consciousness is a program running on the specialized wetware that is your brain. It might be possible to run your consciousness in a microanatomical computersimulation, but a microana sim is still run on a computer made of atoms.
Now iformation theoretically it must be possible to say something about this consciousness property that some progams exhibit and others don’t, or maybe there isn’t a hard and fast point where consciousness is defined and it is in fact a continuous spectrum. I don’t know, but if I am to bet I say the latter.
There must also then be some way of making definite statements about how that conscious program will act if it is copied from one medium (human) to another (microanatomical sim).
The information theoretical facts does not change that the computer or the brain that runs the conscious program is still a real physical thing. So we can with science say something about the computation substrate which is made of atoms, about the consciousness property which is information theory, and about the nature of copying a mind which is also information theory.
Now, are you telling me that information theory, chemsitry and electrical engineering are not sciences?
Upvoted for pointing out that the post fails to address a basic issue.
However, I don’t think anything said in the post is really wrong. Your characterization of the zombie argument appears to be this:
The “long version” of the zombie argument has much to say in order to establish A1 and A2. However, the essence of A1 was (in my understanding) established as a philosophical idea long before the zombie argument. If I understand your complaint, it is that Eliezer is not really addressing A2 at all, which is the meat of the zombie argument; rather, in rejecting the conclusion, he is rejecting A1. So, for a more complete argument, he could have directly addressed the idea of the “hard problem of consciousness” and its relationship to empirical science. (Perhaps he does this in other posts; I haven’t read ’em all...)
EDIT:
I now have a different understanding (thanks to talking to Richard elsewhere). The point of the zombie argument, in this understanding, is to distinguish “the hard problem of consciousness” from other problems (especially, the neurological problem). Eliezer argues by identifying belief in Zombies with epiphenomenalism; but this seems to require the wrong form of “possible”.
If the zombie argument is meant to establish that given an explanation for the neurological problem, we would still need an explanation for the hard problem, then the notion of “possible” that is relevant is “possible given a theory explaining neurological consciousness”. The zombie argument relies on our intuitions to conclude that, given such a theory, we could still not rule out philosophical zombies.
This does not imply epiphenomenalism because it does not imply that zombies are causally possible. It only argues the need for more statements to rule them out.
That said—if Eliezer is simply denying the intuition that the zombie argument relies on (the intuition that there is something about consciousness that would be left unexplained after we had a physical theory of consciousness, so that such a theory leaves open the possibility of zombies), then that’s “fair game”.
He could have, but, logically speaking, he doesn’t need to. If he rejects the premise A1, he can then reject the conclusion as well, even if the reason A2 is logically valid—since rejecting A1 renders the conclusion unsound.
Nicely argued.
So I suppose the question, for someone who wishes to rescue their opposition to zombies as logically possible entities, is what else they open the door to if they concede “You’re right, science does have something to say about conscious experience after all. One thing science has to say about conscious experience is that a given physical state of the world either gives rise to conscious experience, or it doesn’t; the same state of the world cannot do both.”
That seems a relatively safe move to me.
All of that said, your analogy to Fake-Innocence is a bit of a bait-and-switch. The idea that two different systems (including the same individual at different times) can demonstrate identical behavior that is in one case the result of a specified mental state (innocence, consciousness, pain, what-have-you) and in the other case is not is very different from the idea that two identical systems (“identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion”) can have the mental state in one case and not in the other.
It’s not clear to me that incredulity is inappropriate with respect to the second claim, except in the sense that it’s impolite.
About Science making the claim “You’re right, science does have something to say about conscious experience after all … [namely] … that a given physical state of the world either gives rise to conscious experience, or it doesn’t; the same state of the world cannot do both.”
This would just be Solution By Fiat. Hardly a very dignified thing for Science to do.
And don’t forget: Chalmers’ goal is to say “IF there is a logical possibility that in another imaginable kind of universe a thing X does not exist (where it exists in this one), THEN this thing X is a valid subject of questions about its nature.”
That is a truly fundamental aspect of epistemology—one of the bedrock assumptions accepted by philosophers—so all Chalmers is doing is employing it. Chalmers did not invent that line of argument.
About the analogy. It only looks like a bait and switch because I did not spell out the implications properly. I should have asked what would happen if there were no possible way for internal inspection of mental state to be done. If, for some reason, we could not do any physics to say what went on inside the mind when it was either telling the truth or lying, would it be valid to deploy that appeal to preposterousness? You must keep my assumption in order to understand the analogy, because I am asking about a situation in which we cannot ever distinguish the physical state of a lying human brain from that of a truth-telling human brain, but in which we nevertheless had privileged access to our own mental states, and knew for sure that sometimes we lied when we made a genuine protest of innocence. (Imagine, if you will, a universe in which the crucial mental process that determined intention to tell the truth versus intention to deceive was actually located inside some kind of quantum field subject to an uncertainty principle, in such a way that external knowledge of the state was forbidden.)
My point is that if we lived in such a universe, and if Eliezer poured scorn on the idea of Appearance-Of-Innocence without Intention-To-Be-Genuine, his appeal would be transparently empty.
I have no idea what dignity has to do with anything here.
As for the analogy… sure, if we discard the assertion that the two systems are physically identical, then there’s no problem. Agreed. The idea that two systems can demonstrate the same behavior at some level of analysis (e.g., they both utter “Hey! I’m conscious!”), where one of them is conscious and one isn’t, isn’t problematic at all.
It’s also not the claim the essay you’re objecting to was objecting to.
That’s why I classed it as a Bait and Switch.
It isn’t solution by fiat; the idea isn’t to add just that statement to science. Rather, the idea is that such a statement already seems probable from basic scientific considerations such as those discussed in the post.
EDIT:
I see now that this is not relevant. The point of the zombie argument is not to refute such considerations, but rather, to illustrate the difference between “the hard problem of consciousness” and other sorts of consciousness.
So, if we have knowledge that cannot possibly be observed in the physical world, then that proves that there is something else going on? Are you saying, for example, that we somehow know both the position and momentum of a particle with a precision greater than that allowed by the Heisenberg Uncertainty Principle, and that this gives rise to us either knowing that we are lying or knowing that we are telling the truth?
Well sure, if you start out with the given premise that breaks the laws of physics as we know them, of course you are going to conclude that there is something beyond “mere atoms”. Suppose we know that the sky is actually green, even though all of physics says it should be blue. Clearly our map (aka the laws of physics as we currently know them) doesn’t match the territory (the stuff that’s causing our observations). But it doesn’t seem to be necessary to resort to such wild hypotheses, because it is still quite plausible that consciousness emerges from “mere atoms”. We just don’t know the details of how yet, but we’re working on it. If someday we have a full understanding of the brain, and there doesn’t seem to be anything there to give rise to consciousness, then such wild speculation will be warranted. Today though, the substance dualism argument has no evidence behind it, and therefore an infinitesimally small probability of being true.
Hello. You state that “it is still quite plausible that consciousness emerges from ‘mere atoms’ ”, but you do not explain why you make that statement. In fact, you say that one day it will all be totally clear, even if it isn’t yet right now.
I might be wrong, but that’s why I’m asking: Is it not possible to say that about anything?
Logically possible, yes. But in practice, you could not use outward signs of emotion to determine whether anyone was lying. If, somehow, there were no other ways to determine whether people other than yourself were lying (preposterous, yes, but bear with my thought experiment for a moment), then the best you could do is to say, “Well, I know that I sometimes lie, but everyone else has no capacity for lies at all, as far as I can ever know.” In other words, you’d have arrived at a sort of deception-solipsism. Would you agree?
I would think that the better analogy would be “Well, I know that I sometimes tell the truth, but so far as I can ever know, the utterances of other people bear no special relationship to the truth”. I find it to be a better analogy because, in this view, we could try to introduce “philosophical liars”: people who appear to be truthful in every way, but are merely putting up facades, with no inherent truth-connection behind their words.