I’m trying to understand your objection, but it seems like a quibble to me. You seem to be saying that the analogy between qualia and gensyms isn’t perfect because gensyms are leaky abstractions. But I don’t think it has to be perfect to convey the essential idea. Analogies rarely are.
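For readers unfamiliar with the analogy: a Lisp gensym is a freshly created symbol with no internal structure, which can be recognized and compared for identity but not further described. Here is a rough Python analogue, using bare object identity; it is offered only as a sketch of the analogy’s flavor, not as Drescher’s own formulation:

```python
# Rough Python analogue of Lisp gensyms: fresh tokens with no
# internal structure. A token can be recognized as itself and
# distinguished from others, but there is nothing "inside" it
# to describe.
red_token = object()
green_token = object()

print(red_token is red_token)    # True: recognizable as itself
print(red_token is green_token)  # False: distinguishable from others
# There is no further answer to "what is red_token like?" beyond
# pointing at it: the analogy to an ineffable quale.
```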
Here’s my understanding of the point. Let’s say that I’m looking at something, and I say, “that’s a car”. You ask me, “how do you know it’s a car?” And I say, “it’s in a parking lot, it looks like a car...” You say, “and what does a car look like?” And maybe I try to describe the car in some detail. Let’s say I mention that the car has windows, and you ask, “what does a window look like?” I mention glass, and you ask, “what does glass look like?” We keep drilling down. Every time I describe something, you ask me about one of the components of the description.
This can’t go on forever. It has to stop. It stops somewhere. It stops where I say, “I see X”, and you ask, “describe X”, and I say, “X looks like X”—I’m no longer able to give a description of the thing in terms of component parts or aspects. I’ve reached the limit.
There has to be a limit, because the mind is not infinite. There have to be things which I can perceive, which I can recognize, but which I am unable to describe—except to say that they look like themselves, that I recognize them. This is unavoidable. Create for me any AI that has the ability to perceive, and we can drill down the same way with that AI, finally reaching something about which the AI says, “I see X”, and when we ask the AI what X looks like, the AI is helpless to say anything but, “it looks like X”.
Any finite creature (carbon or silicon) that can perceive has some limit, where it can perceive a thing but can’t describe it except to say that it looks like itself. The creature just knows; it clearly sees that thing, but for the life of it, it can’t give a description of it. But since the creature can clearly see it, the creature can say that it has a “raw feel”.
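To make the drill-down concrete, here is a minimal Python sketch with an invented toy vocabulary (the terms and their decompositions are purely illustrative, not anyone’s actual theory of perception). A describer with a finite dictionary of decompositions must bottom out at terms it can only echo back:

```python
# A toy sketch of the drill-down. Each term either decomposes into
# parts or is a primitive the describer can only echo back as itself.
# The vocabulary here is invented purely for illustration.
DECOMPOSITIONS = {
    "car": ["window", "wheel", "door"],
    "window": ["glass", "frame"],
    "glass": ["transparent", "smooth"],
    # "transparent", "smooth", etc. have no entries: they are primitives.
}

def describe(term, depth=0):
    indent = "  " * depth
    parts = DECOMPOSITIONS.get(term)
    if parts is None:
        # The limit: no decomposition available, the term names itself.
        print(indent + term + " looks like... " + term + ".")
        return
    print(indent + term + " has: " + ", ".join(parts))
    for part in parts:
        describe(part, depth + 1)

describe("car")
```

Because the dictionary is finite and acyclic, every chain of “what does X look like?” questions terminates at a primitive after finitely many steps.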
These things are ineffable—indescribable. And ineffability is one of the key properties of qualia. The four properties given by Dennett (from Wikipedia) are:
1. ineffable; that is, they cannot be communicated, or apprehended by any other means than direct experience.
2. intrinsic; that is, they are non-relational properties, which do not change depending on the experience’s relation to other things.
3. private; that is, all interpersonal comparisons of qualia are systematically impossible.
4. directly or immediately apprehensible in consciousness; that is, to experience a quale is to know one experiences a quale, and to know all there is to know about that quale.
As for the other three: well, they would take a book.
But qualia are not any of those things! They are not epiphenomenal! They can be compared. I can classify them into categories like “pleasant”, “unpleasant” and “indifferent”. I can tell you that certain meat tastes like chicken, and you can understand what I mean by “taste”, and understand the gist of “like chicken” even if the taste is not perfectly identical to that of chicken. I suppose that I would be unable to describe what it’s like to have qualia to something that has no qualia whatsoever, but even that, I think, is just a failure of creativity rather than a theoretical impossibility. [ETA: indeed, before I could create a conscious AI, I’d in some sense have to figure out how to provide exactly such a description to a computer.]
I apologize if this is recapitulating earlier comments—I haven’t read this entire discussion—and feel free to point me to a different thread if you’ve covered this elsewhere, but: on your view, could a simulation of me in a computer classify the things that it has (which, on your view, cannot be actual qualia) into categories like “pleasant” and “unpleasant” and “indifferent”? Could it tell me that certain (simulations of) meat tastes like chicken, and if it did, could I understand what it meant by “taste” and understand the gist of “like chicken”?
If not, then on your view, what would actually happen instead, if it tried? (Or, if trying is another thing that can’t be a computation, then: if it simulated me trying?)
If so, then on your view, how can any of those operations qualify as comparing qualia?
> I apologize if this is recapitulating earlier comments—I haven’t read this entire discussion—and feel free to point me to a different thread if you’ve covered this elsewhere, but: on your view, could a simulation of me in a computer classify the things that it has (which, on your view, cannot be actual qualia) into categories like “pleasant” and “unpleasant” and “indifferent”? Could it tell me that certain (simulations of) meat tastes like chicken, and if it did, could I understand what it meant by “taste” and understand the gist of “like chicken”?
I’m not certain what you mean by “could a simulation of me do X”. I’ll read it as “could a simulator of me do X”. And my answer is yes, a computer program could make those judgements without actually experiencing any of those qualia, just like it could make judgements about what trajectory the computer hardware would follow if it were in orbit around Jupiter, without it having to actually be there.
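To make the Jupiter analogy concrete, here is a minimal sketch, assuming a crude two-body point-mass model with Euler integration (the starting radius and step size are arbitrary illustrations). The program computes what trajectory the hardware would follow near Jupiter while physically sitting somewhere else entirely:

```python
# Sketch: a program judging what trajectory a body would follow
# around Jupiter, without the hardware being anywhere near Jupiter.
# Crude Euler integration; the constant is the standard published value.
MU_JUPITER = 1.26686534e17  # Jupiter's gravitational parameter, m^3/s^2

def step(pos, vel, dt):
    x, y = pos
    r = (x * x + y * y) ** 0.5
    ax, ay = -MU_JUPITER * x / r**3, -MU_JUPITER * y / r**3
    vx, vy = vel[0] + ax * dt, vel[1] + ay * dt
    return (x + vx * dt, y + vy * dt), (vx, vy)

# Start 100,000 km from Jupiter's center at roughly circular speed.
pos, vel = (1.0e8, 0.0), (0.0, (MU_JUPITER / 1.0e8) ** 0.5)
for _ in range(1000):
    pos, vel = step(pos, vel, 60.0)  # one-minute steps
print(pos)  # a predicted position near Jupiter, computed from here
```

The judgement is about Jupiter; the hardware making it never leaves the room.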
> a computer program could make those judgements (sic) without actually experiencing any of those qualia
Just as an FYI, this is the place where your intuition is blindsiding you. Intuitively, you “know” that a computer isn’t experiencing anything… and that’s what your entire argument rests on.
However, this “knowing” is just an assumption, and it assumes the very thing in question: does it make sense to speak of a computer experiencing something?
And there is no reason, apart from that intuition/assumption, to treat this as a different question from “does it make sense to speak of a brain experiencing something?”
IOW, substitute “brain” for every use of “computer” or “simulation”, and make the same assertions. “The brain is just calculating what feelings and qualia it should have, not really experiencing them. After all, it is just a physical system of chemicals and electrical impulses. Clearly, it is foolish to think that it could thereby experience anything.”
By making brains special, you’re privileging the qualia hypothesis based on an intuitive assumption.
I don’t think you read my post very carefully. I didn’t claim that qualia are a phenomenon unique to human brains. I claimed that human-like qualia are a phenomenon unique to human brains. Computers might very well experience qualia; so might a lump of coal. But if you think a computer simulation of a human experiences the same qualia as a human, while a lump of coal experiences no qualia or different ones, you need to make that case to me.
> But if you think a computer simulation of a human experiences the same qualia as a human, while a lump of coal experiences no qualia or different ones, you need to make that case to me.
Actually, I’d say you need to make a case for WTF “qualia” means in the first place. As far as I’ve ever seen, it seems to be one of those words that people use as a handwavy thing to prove the specialness of humans. When we know what “human qualia” reduce to, specifically, then we’ll be able to simulate them.
That’s a pretty good operational definition of “reduce”, actually. ;-) (Not to mention “know”.)
Sure, ^simulator^simulation preserves everything relevant from my pov.
And thanks for the answer.
Given that, I really don’t get how the fact that you can do all of the things you list here (classify stuff, talk about stuff, etc.) should count as evidence that you have non-epiphenomenal qualia, which seems to be what you are claiming there.
After all, if you (presumed qualiaful) can perform those tasks, and a (presumed qualialess) simulator of you also can perform those tasks, then the (presumed) qualia can’t play any necessary role in performing those tasks.
It follows that those tasks can happen with or without qualia, and are therefore not evidence of qualia and not reliable qualia-comparing operations.
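The evidential point can be put in toy Bayesian terms. The numbers below are invented purely for illustration; the only thing that matters is that the two likelihoods are equal, so the likelihood ratio is 1 and observing the task moves the odds nowhere:

```python
# Toy Bayes sketch with made-up numbers: if a task is performed
# equally well with or without qualia, observing the task performed
# does not shift the odds that qualia are present.
p_task_given_qualia = 0.99     # assumed: you (qualiaful) can classify tastes
p_task_given_no_qualia = 0.99  # assumed: your simulator (qualialess) also can

likelihood_ratio = p_task_given_qualia / p_task_given_no_qualia
print(likelihood_ratio)  # 1.0: the observation is no evidence either way
```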
The situation would be different if you had listed activities, like attracting mass or orbiting around Jupiter, that my simulator does not do. For example, if you say that your qualia are not epiphenomenal because you can do things like actually taste chicken, which your simulator can’t do, that’s a different matter, and my concern would not apply.
(Just to be clear: it’s not obvious to me that your simulator can’t taste chicken, but I don’t think that discussion is profitable, for reasons I discuss here.)
> I’m trying to understand your objection, but it seems like a quibble to me. You seem to be saying that the analogy between qualia and gensyms isn’t perfect because gensyms are leaky abstractions. But I don’t think it has to be perfect to convey the essential idea. Analogies rarely are.
You haven’t responded to the broader part of my point. If you want to claim that qualia are computations, then you either need to specify a particular computer architecture, or you need to describe them in a way that’s independent of any such choice. In the first case, the architecture you want is probably “the universe”, in which case you’re defining an algorithm by specifying its physical implementation and you’ve affirmed my thesis. In the latter case, all you get to talk about is inputs and outputs, not algorithms.
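To illustrate the second horn, here is a sketch of two procedures (hypothetical names, written for this comment) whose input/output behavior is identical but whose internal algorithms differ. An architecture-independent specification, such as “returns the sorted list”, cannot distinguish between them:

```python
# Two procedures with identical input/output behavior but different
# internal algorithms: an architecture-independent specification
# ("returns the sorted list") cannot tell them apart.
def sort_by_selection(xs):
    out, rest = [], list(xs)
    while rest:
        m = min(rest)        # repeatedly extract the minimum
        rest.remove(m)
        out.append(m)
    return out

def sort_by_merging(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2       # split, sort halves, merge
    a, b = sort_by_merging(xs[:mid]), sort_by_merging(xs[mid:])
    out = []
    while a and b:
        out.append(a.pop(0) if a[0] <= b[0] else b.pop(0))
    return out + a + b

print(sort_by_selection([3, 1, 2]) == sort_by_merging([3, 1, 2]))  # True
```

A description that is independent of architecture constrains only this shared input/output behavior; the internal algorithm is left open.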
You seem to be mixing up two separate arguments. In one argument I assume, for the sake of argument, the unproblematic existence of qualia, and argue, under this assumption, that qualia are possible in a simulation and therefore that we could (in principle) be living in a simulation. In the other argument (the current one) I simply answered your question about what sort of qualia skeptic I am.
So, in this argument, the current one, I am continuing the discussion where, in answer to your question, I have admitted to being a qualia skeptic more or less along the lines of Drescher and Dennett. This discussion is about my skepticism about the idea of qualia. This discussion is not about whether I think qualia are computations. It is about my skepticism.
Similarly, if I were admitting to skepticism about Santa Claus, it would not be an appropriate place to argue with me about whether Santa is a human or an elf.
Maybe you are basing your current focus on computations on Drescher’s analogy with Lisp’s gensyms. That’s something for you to take up with Drescher. By now I’ve explained—at some length—what it is that resonated with me in Drescher’s account and why. It doesn’t depend on qualia being computations. It depends on there being a limit to perception.
On further reflection, I’m not certain that your position and mine are incompatible. I’m a personal identity skeptic in roughly the same sense that you’re a qualia skeptic. Yet, if somebody points out that a door is open when it was previously closed, and reasons “someone must have opened it”, I don’t consider that reasoning invalid. I just think they need to modify the word “someone” if they want to be absolutely pedantically correct about what occurred. Similarly, your skepticism about qualia doesn’t really contradict my claim that the objects of a computer simulation would have no (or improper) qualia; at worst it means that I ought to slightly modify my description of what it is that those objects wouldn’t have.
Ok, I’ve really misunderstood you then. I didn’t realize that you were taking a devil’s advocate position in the other thread. I maintain the arguments I’ve made in both threads in challenge to all those commenters who do claim that qualia are computation.