1. Qualia exist (because: we experience them)
2. The brain causes qualia (because: if you cut off any other part of someone they still seem to have qualia)
3. If you simulate a brain with a Turing machine, it won’t have qualia (because: qualia are clearly a basic fact of physics and there’s no way, just using physics, to tell whether something is a Turing-machine-simulating-a-brain or not)
Which part does LW disagree with and why?
The whole thing: it’s the Chinese Room all over again, an intuition pump that begs the very question it’s purportedly answering. (Beginning an argument for the existence of qualia with a bare assertion that they exist is a little more obvious than the way that the word “understanding” is fudged in the Chinese Room argument, but basically it’s the same.)
I suppose you could say that there’s a grudging partial agreement with your point number two: that “the brain causes qualia”. The rest of what you listed, however, is drivel, as is easy to see if you substitute some other term besides “qualia”, e.g.:
Free will exists (because: we experience it)
The brain causes free will (because if you cut off any part, etc.)
If you simulate a brain with a Turing machine, it won’t have free will because clearly it’s a basic fact of physics and there’s no way to tell just using physics whether something is a machine simulating a brain or not.
It doesn’t matter what term you plug into this in place of “qualia” or “free will”, it could be “love” or “charity” or “interest in death metal”, and it’s still not saying anything more profound than, “I don’t think machines are as good as real people, so there!”
Or more precisely: “When I think of people with X it makes me feel something special that I don’t feel when I think of machines with X, therefore there must be some special quality that separates people from machines, making machine X ‘just a simulation’.” This is the root of all these Searle-ian arguments, and they are trivially dissolved by understanding that the special feeling people get when they think of X is also a property of how brains work.
Specifically, the thing that drives these arguments is our inbuilt machinery that classifies things as mind-having or not-mind-having, for purposes of prediction-making. But the feeling that we get that a thing is mind-having or not-mind-having is based on what was useful evolutionarily, not on what the actual truth is. Searlian (Surly?) arguments are thus in exactly the same camp as any other faith-based argument: elevating one’s feelings to Truth, irrespective of the evidence against them.
Just a nit pick: the argument Aaron presented wasn’t an argument for the existence of qualia, and so taking the existence of qualia as a premise doesn’t beg the question. Aaron’s argument was an argument against artificial consciousness.
Also, I think Aaron’s presentation of (3) was a bit unclear, but it’s not so bad a premise as you think. (3) says that since qualia are not reducible to purely physical descriptions, and since a brain-simulating Turing machine is entirely reducible to purely physical descriptions, brain-simulating Turing machines won’t experience qualia. So if we have qualia, and count as conscious in virtue of having qualia (1), then brain-simulating Turing machines won’t count as conscious. If we don’t have qualia, i.e. if all our mental states are reducible to purely physical descriptions, then the argument is unsound because premise (1) is false.
You’re right that you can plug many a term in to replace ‘qualia’, so long as those things are not reducible to purely physical descriptions. So you couldn’t plug in, say, heart-attacks.
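To make the structure of that comment explicit, here is a minimal formal sketch of the three-premise reading given above. The proposition names (Conscious, HasQualia, Reducible) are labels introduced here purely for illustration, not terms anyone in the thread used, and the sketch only checks that the argument is valid; whether it is sound still depends entirely on the premises.

```lean
-- A minimal sketch of the argument's logical form; the labels are
-- illustrative stand-ins, not terms from the thread.
--   (1) being conscious entails having qualia
--   (2) whatever has qualia is not reducible to a purely physical description
--   (3) a brain-simulating Turing machine is so reducible
-- From (1)-(3) it follows that the simulation is not conscious.
example (Conscious HasQualia Reducible : Prop)
    (h1 : Conscious → HasQualia)       -- premise (1)
    (h2 : HasQualia → ¬ Reducible)     -- premise (2)
    (h3 : Reducible)                   -- premise (3)
    : ¬ Conscious :=
  fun hc => h2 (h1 hc) h3              -- validity only; says nothing about soundness
```

If any of the premises fails (the comment above singles out (1)), the argument remains valid but unsound, so the conclusion is not established.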
Could you explain this a bit more? I don’t see how it’s relevant to the argument. Searle is not arguing on the basis of any special feelings. This seems like a straw man to me, at the moment, but I may not be appreciating the flaws in Searle’s argument.
In order for the argument to make any sense, you have to buy into several assumptions which basically are the argument. It’s “qualia are special because they’re special, QED”. I thought about calling it circular reasoning, except that it seems closer to begging the question. If you have a better way to put it, by all means share.
When I said that our mind-detection circuitry was the root of the argument, I didn’t mean that Searle was overtly arguing on the basis of his feelings. What I’m saying is, the only evidence for Searle-type premises is the feelings created by our mind-detection circuitry. If you assume these feelings mean something, then Searle-ish arguments will seem correct, and Searle-ish premises will seem obvious beyond question.
However, if you truly grok the mind-projection fallacy, then Searle-type premises are just as obviously nonsensical, and there’s no reason to pay any attention to the arguments built on top of them. Even as basic a tool as Rationalist Taboo suffices to debunk the premises before the argument can get off the ground.
Any valid argument has a conclusion that is entailed by its premises taken jointly. Circularity is when the whole conclusion is entailed by one premise, with the others being window-dressing.
I think there is a way that ripe tomatoes seem visually: how is that mind-projection?
But … if you’re assuming that qualia are “not reducible to purely physical descriptions”, and you need qualia to be conscious, then obviously brain-simulations won’t be conscious. But those assumptions seem to be the bulk of the position he’s defending, aren’t they?
Right, the argument comes down, for most of us, to the first premise: do we or do we not have mental states irreducible to purely physical descriptions? Aaron didn’t present an argument for that, he just presented Searle’s argument against AI from that. But you’re right to ask for a defense of that premise, since it’s the crucial one and it’s (at the moment) undefended here.
Presenting an obvious result of a nonobvious premise as if it was a nonobvious conclusion seems suspicious, as if he’s trying to trick listeners into accepting his conclusion even when their priors differ.
[Edited for terminology.]
Not only suspicious, but impossible: if the premises are non-trivial, the conclusion is non-trivial.
In every valid argument, the conclusion follows straight away from the premises. If you accept the premises, and the argument is valid, then you must accept the conclusion. The conclusion does not need any further support.
Y’know, you’re right. Trivial is not the right word at all.
To pick a further nit, the argument is more that qualia can’t be engineered into an AI. If an AI implementation has qualia at all, it would be serendipitous.
That’s a possibility, but not as I laid out the argument: if being conscious entails having qualia, and if qualia are all irreducible to purely physical descriptions, and every state of a Turing machine is reducible to a purely physical description, then Turing machines can’t simulate consciousness. That’s not very neat, but I do believe it’s valid. Your alternative is plausible, but it requires my ‘Turing machines are reducible to purely physical descriptions’ premise to be false.
Huh? This isn’t an argument for the existence of qualia—it’s an attempt to figure out whether you believe in qualia or not. So I take it you disagree with step one, that qualia exists? Do you think you are a philosophical zombie?
I do think essentially the same argument goes through for free will, so I don’t find your reductio at all convincing. There’s no reason, however, to believe that “love” or “charity” is a basic fact of physics, since it’s fairly obvious how to reduce these. Do you think you can reduce qualia?
I don’t understand why you think this is a claim about my feelings.
Suppose that neuroscientists some day show that the quale of seeing red matches a certain brain structure or a neuron firing pattern or a neuro-chemical process in all humans. Would you then say that the quale of red has been reduced?
Of course not!
and why not?
Because the neuron firing pattern is presumably the cause of the quale, it’s certainly not the quale itself.
I don’t understand what else is there.
Imagine a flashlight with a red piece of cellophane over it pointed at a wall. Scientists some day discover that the red dot on the wall is caused by the flashlight—it appears each and every time the flashlight fires and only when the flashlight is firing. However, the red dot on the wall is certainly not the same as the flashlight: one is a flashlight and one is a red dot.
The red dot, on the other hand, could be reduced to some sort of interaction between certain frequencies of light-waves and wall-atoms and so on. But it will certainly not get reduced to flashlights.
By the same token, you are not going to reduce the-subjective-experience-of-seeing-red to neurons; subjective experiences aren’t made out of neurons any more than red dots are made of flashlights.
Ok, that’s where we disagree. To me the subjective experience is the process in my brain and nothing else.
There’s no argument there. Your point about qualia is illustrated by your point about flashlights, but not entailed by it.
How do you know this?
There’s no certainty either way.
Reduction is an explanatory process: a mere observed correlation does not qualify.
I think that anyone talking seriously about “qualia” is confused, in the same way that anyone talking seriously about “free will” is.
That is, they’re words people use to describe experiences as if they were objects or capabilities. Free will isn’t something you have, it’s something you feel. Same for “qualia”.
Dissolving free will is considered an entry-level philosophical exercise for LessWrong. If you haven’t covered that much of the Sequences homework, it’s unlikely that you’ll find this discussion especially enlightening.
(More to the point, you’re doing the rough equivalent of bugging people on a newsgroup about a question that is answered in the FAQ or an RTFM.)
This is probably a good answer to that question.
Because (as with free will) the only evidence anyone has (or can have) for the concept of qualia is their own intuitive feeling that they have some.
So you say. It is not standardly defined that way.
Qualia are defined as feelings, sensations, etc. Since we have feelings, sensations, etc., we have qualia. I do not see the confusion in using the word “qualia”.
Well, would that mean writing a series like this?
My intuition certainly says that Martha has a feeling of ineffable learning. Do you at least agree that this proves the unreliability of our intuitions here?
Who said anything about our intuitions (except you, of course)?
You keep making statements like, “the neuron firing pattern is presumably the cause of the quale, it’s certainly not the quale itself.”
And you seem to consider this self-evident. Well, it seemed self-evident to me that Martha’s physical reaction would ‘be’ a quale. So where do we go from there?
(Suppose your neurons reacted all the time the way they do now when you see orange light, except that they couldn’t connect it to anything else—no similarities, no differences, no links of any kind. Would you see anything?)
I guess you need to do some more thinking to straighten out your views on qualia.
Goodnight, Aaron Swartz.
downvoted posthumously.
Let’s back up for a second:
You’ve heard of functionalism, right? You’ve browsed the SEP entry?
Have you also read the mini-sequence I linked? In the grandparent I said “physical reaction” instead of “functional”, which seems like a mistake on my part, but I assumed you had some vague idea of where I’m coming from.
Or you do. You claim the truth of your claims is self-evident, yet it is not evident to, say, hairyfigment, or Eliezer, or me for that matter.
If I may ask, have you always held this belief, or do you recall being persuaded of it at some point? If so, what convinced you?
Could you expand on this point, please? It is generally agreed* that “free will vs determinism” is a dilemma that we dissolved long ago. I can’t see what else you could mean by this, so …
[*EDIT: here, that is]
I guess it really depends on what you mean by free will. If by free will, pjeby meant some kind of qualitative experience, then it strikes me that what he means by it is just a form of qualia and so of course the qualia argument goes through. If he means by it something more complicated, then I don’t see how point one holds (we experience it), and the argument obviously doesn’t go through.
But that’s not contentious. Qualia are things like the appearance of tomatoes or the taste of lemon. I’ve seen tomatoes and tasted lemons.
But Searle says that feelings, understanding, etc. are properties of how the brain works. What he argues against is the claim that they are computational properties. But it is also uncontentious that physicalism can be true and computationalism false.