Implying that qualia can be removed from a brain while maintaining all internal processes that sum up to cause talk of qualia, without deliberately replacing them with a substitute. In other words, your “qualia” are causally impotent and, I’d go so far as to say, meaningless.
Doesn’t follow. Qualia aren’t causing Charles’s qualia-talk, but that doesn’t mean they aren’t causing mine.
Kidney dialysis machines don’t need nephrons, but that doesn’t mean nephrons are causally idle in kidneys.
The epiphenomenality argument works for atom-by-atom duplicates, but not in WBE and neural replacement scenarios. If identity theory is true, qualia have the causal powers of whatever physical properties they are identical to. And if identity theory is true, changing the physical substrate could remove or change the qualia.
Kidney dialysis machines don’t need nephrons, but that doesn’t mean nephrons are causally idle in kidneys.
You keep bringing up that argument, but kidney dialysis machines are built specifically to replace the functionality of kidneys (“deliberately replacing them with a substitute”). If you built a kidney-dialysis machine by a 1:1 mapping and forgot some cell type that is causally active in kidneys, the machine would not actually work. If it did, you should question whether that cell type actually does anything in kidneys.
Changing the physical substrate could remove the qualia, but to claim it could remove the qualia while keeping talk of qualia alive, by sheer coincidence—implying that there’s a separate, unrelated reason why the replacement neurons talk of qualia, that has nothing to do with qualia, that was not deliberately engineered—that stretches belief past the breaking point. You’re saying, essentially: “qualia cause talk of qualia in my meatbrain, but talk of qualia is not any indication of qualia in any differently built brain implementing the same spec”. Then why are you so certain that your talk of qualia is caused by your supposed qualia, and not the neural analogue of what causes talk of qualia in WBE brains? It really does sound like your qualia are either superfluous or bizarre.
[edit] Actually, I’m still not sure I understand you. Are you proposing that it’s impossible to build a straight neuron substitute that talks of qualia, without engineering purposeful qualia-talk-emulation machinery? Is that what you mean by “functional equivalent”? I’m having serious trouble comprehending your position.
[edit] I went back to your original comment, and I think we’re using “functional equivalence” in a very different sense. To you, it seems to indicate “a system that behaves in the same way despite having potentially hugely different internal architecture”. To me, it indicates a 1:1 neuron computational replacement; keeping the computational processes while running them on a different substrate.
I agree that there may conceivably exist functionally equivalent systems that don’t have qualia, even though I have difficulty seeing how they could compute “talk of qualia” without running a sufficient-fidelity qualia simulation internally, which would again correspond to our qualia. However, I find it unlikely that anybody who is not a very very bored deity would ever actually create such a system—the qualia-talk machinery seems completely pointless to its function, as well as probably much more computationally expensive. (This system has to be self-deluding in a way consistent with a simpler system that it is not allowed to emulate) Why not just build a regular qualia engine, by copying the meat-brain processes 1:1? That’s what I’d consider the “natural” functional-equivalence system.
If you built a kidney-dialysis machine by a 1:1 mapping and forgot some cell type that is causally active in kidneys, the machine would not actually work.
I am arguing about cases of WBE and neural replacement, which are stipulated as not being 1:1 atom-for-atom replacements.
Changing the physical substrate could remove the qualia, but to claim it could remove the qualia while keeping talk of qualia alive, by sheer coincidence
Not coincidence: a further stipulation that functional equivalence is preserved in WBEs.
Are you proposing that it’s impossible to build a straight neuron substitute that talks of qualia, without engineering purposeful qualia-talk-emulation machinery?
I am noting that equivalent talk must be included in functional equivalence.
Why not just build a regular qualia engine, by copying the meat-brain processes 1:1?
You mean atom-by-atom? But it has been put to me that you only need synapse-by-synapse copies. That is what I am responding to.
Okay. I don’t think it’s possible to build a functional equivalent of a mind that talks of qualia because it has them, by 1:1 porting at the synapse level, and get something that talks of qualia without having any. You can stipulate that all day but I don’t think it can actually be done. This is contingent on neurons being the computational elements of our minds. If it turns out that most of the computation of mindstates is done by some sort of significantly lower-scale process and synaptic connections are, if not coincidental, then at least not the primary element of the computation going on in our heads, I could imagine a neural-level functional equivalent that talked of qualia while running the sort of elaborate non-emulation described in my previous comment.
But if neurons are the computational basis of our minds, and you did a 1:1 synapse-level identical functional copy, and it talked of qualia, it would strain credulity to say it talked of qualia for a different reason than the original did, while implementing the same computation. If you traced the neural impulses backwards all the way to the sensory input that caused the utterance, and verified that the neurons computed the same function in both systems, then what’s there left to differentiate them? Do you think your talk of qualia is not caused by a computation in your neurons? Qualia are the things that make us talk about qualia, or else the word is meaningless. To say that the equivalent, different-substrate system talked about qualia out of the same computational processes (at neuron level), but for different, incorrect reasons—that, to me, is either Chalmers-style dualism or some perversion of language that carries no practical value.