I apologize if this is recapitulating earlier comments—I haven’t read this entire discussion—and feel free to point me to a different thread if you’ve covered this elsewhere, but: on your view, could a simulation of me in a computer classify the things that it has (which, on your view, cannot be actual qualia) into categories like “pleasant” and “unpleasant” and “indifferent”? Could it tell me that certain (simulations of) meat tastes like chicken, and if it did, could I understand what it meant by “taste” and understand the gist of “like chicken”?
I’m not certain what you mean by “could a simulation of me do X”. I’ll read it as “could a simulator of me do X”. And my answer is yes, a computer program could make those judgements without actually experiencing any of those qualia, just like it could make judgements about what trajectory the computer hardware would follow if it were in orbit around Jupiter, without actually having to be there.
a computer program could make those judgements (sic) without actually experiencing any of those qualia
Just as an FYI, this is the place where your intuition is blindsiding you. Intuitively, you “know” that a computer isn’t experiencing anything… and that’s what your entire argument rests on.
However, this “knowing” is just an assumption, and it assumes the very thing in question: does it make sense to speak of a computer experiencing something?
And there is no reason, apart from that intuition/assumption, to treat this as a different question from “does it make sense to speak of a brain experiencing something?”
IOW, substitute “brain” for every use of “computer” or “simulation”, and make the same assertions. “The brain is just calculating what feelings and qualia it should have, not really experiencing them. After all, it is just a physical system of chemicals and electrical impulses. Clearly, it is foolish to think that it could thereby experience anything.”
By making brains special, you’re privileging the qualia hypothesis based on an intuitive assumption.
I don’t think you read my post very carefully. I didn’t claim that qualia are a phenomenon unique to human brains. I claimed that human-like qualia are a phenomenon unique to human brains. Computers might very well experience qualia; so might a lump of coal. But if you think a computer simulation of a human experiences the same qualia as a human, while a lump of coal experiences no qualia or different ones, you need to make that case to me.
But if you think a computer simulation of a human experiences the same qualia as a human, while a lump of coal experiences no qualia or different ones, you need to make that case to me.
Actually, I’d say you need to make a case for WTF “qualia” means in the first place. As far as I’ve ever seen, it seems to be one of those words that people use as a handwavy thing to prove the specialness of humans. When we know what “human qualia” reduce to, specifically, then we’ll be able to simulate them.
That’s a pretty good operational definition of “reduce”, actually. ;-) (Not to mention “know”.)
Sure, reading “simulation” as “simulator” preserves everything relevant from my pov.
And thanks for the answer.
Given that, I really don’t get how the fact that you can do all of the things you list here (classify stuff, talk about stuff, etc.) should count as evidence that you have non-epiphenomenal qualia, which seems to be what you are claiming there.
After all, if you (presumed qualiaful) can perform those tasks, and a (presumed qualialess) simulator of you can also perform them, then the (presumed) qualia can’t play any necessary role in performing those tasks.
It follows that those tasks can happen with or without qualia, and are therefore not evidence of qualia and not reliable qualia-comparing operations.
The situation would be different if you had listed activities, like attracting mass or orbiting around Jupiter, that my simulator does not do. For example, if you say that your qualia are not epiphenomenal because you can do things like actually taste chicken, which your simulator can’t do, that’s a different matter, and my concern would not apply.
(Just to be clear: it’s not obvious to me that your simulator can’t taste chicken, but I don’t think that discussion is profitable, for reasons I discuss here.)