I don’t see anything very new here.
Charles: “Uh-uh! Your operation certainly did disturb the true cause of my talking about consciousness. It substituted a different cause in its place, the robots. Now, just because that new cause also happens to be conscious—talks about consciousness for the same generalized reason—doesn’t mean it’s the same cause that was originally there.”
Albert: “But I wouldn’t even have to tell you about the robot operation. You wouldn’t notice. If you think, going on introspective evidence, that you are in an important sense “the same person” that you were five minutes ago, and I do something to you that doesn’t change the introspective evidence available to you, then your conclusion that you are the same person that you were five minutes ago should be equally justified. Doesn’t the Generalized Anti-Zombie Principle say that if I do something to you that alters your consciousness, let alone makes you a completely different person, then you ought to notice somehow?”
How does Albert know that Charles’s consciousness hasn’t changed? It could have changed because of the replacement of protoplasm by silicon. And Charles won’t report the change, because the change preserves functional equivalence.
Charles: “Introspection isn’t perfect. Lots of stuff goes on inside my brain that I don’t notice.”
If Charles’s qualia have changed, that will be noticeable to Charles—introspection is hardly necessary, since the external world will look different! But Charles won’t report the change. “Introspection” is being used ambiguously here, between what is noticed and what is reported.
Albert: “Yeah, and I can detect the switch flipping! You’re detecting something that doesn’t make a noticeable difference to the true cause of your talk about consciousness and personal identity. And the proof is, you’ll talk just the same way afterward.”
Albert’s comment is a non sequitur. That the same effect occurs does not prove that the same cause occurs. There can be multiple causes of reports like “I see red”. Because the neural substitution preserves functional equivalence, Charles will report the same qualia whether or not he still has them.
Implying that qualia can be removed from a brain while maintaining all internal processes that sum up to cause talk of qualia, without deliberately replacing them with a substitute. In other words, your “qualia” are causally impotent and, I’d go so far as to say, meaningless.
Are you sure you read Eliezer’s critique of Chalmers? This is exactly the error that Chalmers makes.
It may also help you to read Making Beliefs Pay Rent and consider what the notion of qualia actually does for you, if you can imagine a person talking of qualia for the same reason as you while not having any.
Doesn’t follow. Qualia aren’t causing Charles’s qualia-talk, but that doesn’t mean they aren’t causing mine.
Kidney dialysis machines don’t need nephrons, but that doesn’t mean nephrons are causally idle in kidneys.
The epiphenomenality argument works for atom-by-atom duplicates, but not in WBE and neural replacement scenarios. If identity theory is true, qualia have the causal powers of whatever physical properties they are identical to; changing the physical substrate could then remove or change the qualia.
You keep bringing up that argument, but kidney dialysis machines are built specifically to replace the functionality of kidneys (“deliberately replacing them with a substitute”). If you built a kidney-dialysis machine by a 1:1 mapping and forgot some cell type that is causally active in kidneys, the machine would not actually work. If it did, you should question if that cell type actually does anything in kidneys.
Changing the physical substrate could remove the qualia, but to claim it could remove the qualia while keeping talk of qualia alive, by sheer coincidence—implying that there’s a separate, unrelated reason why the replacement neurons talk of qualia, that has nothing to do with qualia, that was not deliberately engineered—that stretches belief past the breaking point. You’re saying, essentially: “qualia cause talk of qualia in my meatbrain, but talk of qualia is not any indication of qualia in any differently built brain implementing the same spec”. Then why are you so certain that your talk of qualia is caused by your supposed qualia, and not the neural analogue of what causes talk of qualia in WBE brains? It really does sound like your qualia are either superfluous or bizarre.
[edit] Actually, I’m still not sure I understand you. Are you proposing that it’s impossible to build a straight neuron substitute that talks of qualia, without engineering purposeful qualia-talk-emulation machinery? Is that what you mean by “functional equivalent”? I’m having serious trouble comprehending your position.
[edit] I went back to your original comment, and I think we’re using “functional equivalence” in a very different sense. To you, it seems to indicate “a system that behaves in the same way despite having potentially hugely different internal architecture”. To me, it indicates a 1:1 neuron computational replacement: keeping the computational processes while running them on a different substrate.
I agree that there may conceivably exist functionally equivalent systems that don’t have qualia, even though I have difficulty seeing how they could compute “talk of qualia” without running a sufficient-fidelity qualia simulation internally, which would again correspond to our qualia. However, I find it unlikely that anybody who is not a very, very bored deity would ever actually create such a system—the qualia-talk machinery seems completely pointless to its function, as well as probably much more computationally expensive. (This system has to be self-deluding in a way consistent with a simpler system that it is not allowed to emulate.) Why not just build a regular qualia engine, by copying the meat-brain processes 1:1? That’s what I’d consider the “natural” functional-equivalence system.
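A minimal sketch of that distinction, in Python (purely illustrative: the two-stage “brain”, the colour codes, and every name below are invented for the example, not taken from the discussion). Behavioural equivalence only fixes the input/output mapping; a 1:1 computational port also preserves the intermediate steps, merely running them on a different substrate.

```python
def perceive(stimulus: str) -> int:
    """Stage 1: stimulus -> internal code (a stand-in for whatever the brain does)."""
    return {"red": 700, "green": 530}[stimulus]  # toy 'wavelength' code

def verbalize(code: int) -> str:
    """Stage 2: internal code -> verbal report."""
    return "I see red" if code > 600 else "I see green"

def original(stimulus: str) -> str:
    """The 'meat' version: the report is produced via the intermediate stage."""
    return verbalize(perceive(stimulus))

# Sense 1: behavioural equivalence. Same input/output behaviour, wildly different
# internals: no intermediate stage at all, just a lookup table.
LOOKUP = {"red": "I see red", "green": "I see green"}

def behavioural_equivalent(stimulus: str) -> str:
    return LOOKUP[stimulus]

# Sense 2: a 1:1 computational port. The same two-stage computation, run on a
# different "substrate" (floats instead of ints here), preserving every step.
def ported(stimulus: str) -> str:
    code = float({"red": 700, "green": 530}[stimulus])
    return "I see red" if code > 600.0 else "I see green"

for s in ("red", "green"):
    assert original(s) == behavioural_equivalent(s) == ported(s)
```

All three agree on every report, but only the port shares the original’s internal computation, which is the sense the 1:1 reading cares about.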
If you built a kidney-dialysis machine by a 1:1 mapping and forgot some cell type that is causally active in kidneys, the machine would not actually work.
I am arguing about cases of WBE and neural replacement, which are stipulated as not being 1:1 atom-for-atom replacements.
Changing the physical substrate could remove the qualia, but to claim it could remove the qualia while keeping talk of qualia alive, by sheer coincidence
Not coincidence: a further stipulation that functional equivalence is preserved in WBEs.
Are you proposing that it’s impossible to build a straight neuron substitute that talks of qualia, without engineering purposeful qualia-talk-emulation machinery?
I am noting that equivalent talk must be included in functional equivalence.
Why not just build a regular qualia engine, by copying the meat-brain processes 1:1?
You mean atom-by-atom? But it has been put to me that you only need synapse-by-synapse copies. That is what I am responding to.
Okay. I don’t think it’s possible to build a functional equivalent of a mind that talks of qualia because it has them, by 1:1 porting at the synapse level, and get something that talks of qualia without having any. You can stipulate that all day but I don’t think it can actually be done. This is contingent on neurons being the computational elements of our minds. If it turns out that most of the computation of mindstates is done by some sort of significantly lower-scale process and synaptic connections are, if not coincidental, then at least not the primary element of the computation going on in our heads, I could imagine a neural-level functional equivalent that talked of qualia while running the sort of elaborate non-emulation described in my previous comment.
But if neurons are the computational basis of our minds, and you did a 1:1 synapse-level identical functional copy, and it talked of qualia, it would strain credulity to say it talked of qualia for a different reason than the original did, while implementing the same computation. If you traced the neural impulses backwards all the way to the sensory input that caused the utterance, and verified that the neurons computed the same function in both systems, then what’s there left to differentiate them? Do you think your talk of qualia is not caused by a computation in your neurons? Qualia are the things that make us talk about qualia, or else the word is meaningless. To say that the equivalent, different-substrate system talked about qualia out of the same computational processes (at neuron level), but for different, incorrect reasons—that, to me, is either Chalmers-style dualism or some perversion of language that carries no practical value.
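On the assumption the comment itself makes (that neurons are the computational elements), here is a toy Python check of the “trace it back and verify the same function” idea; the three-neuron net, its weights, and the function names are invented for illustration.

```python
WEIGHTS = {"n1": [0.6, -0.4], "n2": [0.2, 0.9], "out": [1.0, -1.0]}

def step(x: float) -> float:
    """Threshold activation."""
    return 1.0 if x > 0 else 0.0

def run_substrate_a(inputs):
    """Substrate A ('protoplasm'): dict-and-list bookkeeping."""
    n1 = step(sum(w * i for w, i in zip(WEIGHTS["n1"], inputs)))
    n2 = step(sum(w * i for w, i in zip(WEIGHTS["n2"], inputs)))
    out = step(WEIGHTS["out"][0] * n1 + WEIGHTS["out"][1] * n2)
    return {"n1": n1, "n2": n2, "out": out}  # the full trace, not just the report

def run_substrate_b(inputs):
    """Substrate B ('silicon'): the same synapse-level computation, tuple bookkeeping."""
    w1, w2, wo = tuple(WEIGHTS["n1"]), tuple(WEIGHTS["n2"]), tuple(WEIGHTS["out"])
    n1 = step(w1[0] * inputs[0] + w1[1] * inputs[1])
    n2 = step(w2[0] * inputs[0] + w2[1] * inputs[1])
    out = step(wo[0] * n1 + wo[1] * n2)
    return {"n1": n1, "n2": n2, "out": out}

# "Tracing the impulses back": every intermediate activation, not merely the
# final output, is identical across the two substrates for every test stimulus.
for stimulus in [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]:
    assert run_substrate_a(stimulus) == run_substrate_b(stimulus)
```

Comparing only the final report would be the weaker, behaviour-level sense; comparing the whole trace is what the synapse-level reading adds.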
If Charles’s qualia have changed, that will be noticeable to Charles—introspection is hardly necessary, since the external world will look different! But Charles won’t report the change.
I don’t think I understand what you’re saying here. What kind of change could you notice but not report?
If a change to the way your functionality is implemented alters how your consciousness seems to you, your consciousness will seem different to you. If your functionality is preserved, you won’t be able to report it. You will report that tomatoes are red even if they look grue or bleen to you. (You may also not be able to cognitively access—remember or think about—the change, if that is part of the preserved functionality. But if your experience changes, you can’t fail to experience it.)
Hmm, it seems to me that any change that affects your experience but not your reports must have also affected your memory. Otherwise you should be able to say that the color of tomatoes now seems darker or cooler or just different than it did before. Would you agree?
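A toy Python model of this exchange (all names and the two-colour “mind” are invented; it models only reports and memory, and deliberately says nothing about whether anything is experienced): invert the internal code and, so long as the report rules and the stored memories are inverted to match, every report, including “does this look different from how I remember it?”, is unchanged; leave memory alone and the difference becomes reportable, which is the point of the question above.

```python
SWAP = {"E_RED": "E_GREEN", "E_GREEN": "E_RED"}  # stand-in for an experience inversion

def make_mind(invert_experience: bool, invert_memory: bool):
    base_exp = {"tomato": "E_RED", "grass": "E_GREEN"}  # stimulus -> internal code
    base_rep = {"E_RED": "red", "E_GREEN": "green"}     # internal code -> word

    exp = {s: SWAP[e] if invert_experience else e for s, e in base_exp.items()}
    mem = {s: SWAP[e] if invert_memory else e for s, e in base_exp.items()}
    # The report rules track whatever the codes now are, so outward behaviour is preserved.
    rep = {e: base_rep[SWAP[e]] if invert_experience else base_rep[e] for e in base_rep}

    def speak(stimulus):
        word = rep[exp[stimulus]]                 # what the subject says it sees
        noticed = exp[stimulus] != mem[stimulus]  # "looks different from how I remember it"
        return word, noticed

    return speak

original = make_mind(invert_experience=False, invert_memory=False)
full_swap = make_mind(invert_experience=True, invert_memory=True)
swap_keep_memory = make_mind(invert_experience=True, invert_memory=False)

for s in ("tomato", "grass"):
    assert original(s) == full_swap(s)  # same words, and no noticed change
    assert swap_keep_memory(s)[1]       # memory untouched: the change is reportable
```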