I was talking about Searle’s non-AI work, but since you brought it up, Searle’s view is:
qualia exists (because: we experience it)
the brain causes qualia (because: if you cut off any other part of someone they still seem to have qualia)
if you simulate a brain with a Turing machine, it won’t have qualia (because: qualia is clearly a basic fact of physics and there’s no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)
Which part does LW disagree with and why?
To offer my own reasons for disagreement,
I think the first point is unfounded (or misguided). We do things (like moving, and thinking). We notice and can report that we’ve done things, and occasionally we notice and can report that we’ve noticed that we’ve done something. That we can report how things appear to a part of us that can reflect upon stimuli is not important enough to be called ‘qualia’. That we notice that we find experience ‘ineffable’ is not a surprise either—you would not expect the brain to be able to report everything that occurs, down to the neurons firing (or atoms moving). So, all we really have is the ability to notice and report that which has been advantageous for us to report in the evolutionary history of the human (these stimuli that we can notice are called ‘experiences’). There is nothing mysterious here, and the word ‘qualia’ always seems to be used mysteriously—so I don’t think the first point carries the weight it might appear to.
Qualia is not clearly a basic fact of physics. I made the point that we would not expect a species designed by natural selection to be able to report or comprehend its most detailed, inner workings, solely on the evidence of what it can report and notice. But this is all skirting around the core idea of LessWrong: The map is not the territory. Just because something seems fundamental does not mean it is. Just because it seems like a Turing machine couldn’t be doing consciousness, doesn’t mean that is how it is. We need to understand how it came to be that we feel what we feel, before we go making big claims about the fundamental nature of reality. This is what is worked on in LessWrong, not in Searle’s philosophy.
If the ineffability of qualia is down to the complexity of fine-grained neural behaviour, then the question is why anything is effable—people can communicate about all sorts of things that aren’t sensations (and in many cases are abstract and “in the head”).
I’m not sure that I follow. Can anything we talk about be reduced to less than the basic stimuli we notice ourselves having?
All words (that mean anything) refer to something. When I talk about ‘guitars’, I remember experiences I’ve had which I associate with the word (i.e. guitars). Most humans have similar makeups, in that we learn in similar ways, and experience in similar ways (I’m just talking about the psychological unity of humans, and how far our brain design is from, say, mice). So, we can talk about things, because we’ve learnt to refer certain experiences (words) to others (guitars).
Neither of the two can refer to anything other than the experiences we have. Anything we talk about is in relation to our experiences (or possibly even meaningless).
Most of the classic reductions are reductions to things beneath perceivable stimuli, e.g. heat to molecular motion. Reductionism and physicalism would be in very bad trouble if language and conceptualisation grounded out where perception does. The theory also mispredicts that we would be able to communicate our sensations, but struggle to communicate abstract (e.g. mathematical) ideas with a distant relationship, or no relationship, to sensation. In fact, the classic reductions are to the basic entities of physics, which are ultimately defined mathematically, and often hard to visualise or otherwise relate to sensation.
You could point out the different constituents of experience that feel fundamental, but they themselves (e.g. Red) don’t feel as though they are made up of anything more than themselves.
When we talk about atoms, however, that isn’t a basic piece of mind that mind can talk about. My mind feels as though it is constituted of qualia, and it can refer to atoms. I don’t experience an atom, I experience large groups of them, in complex arrangements. I can refer to the atom using larger, complex arrangements of neurons (atoms). Even though, when my mind asks what the basic parts of reality are, it has a chain of reference pointing to atoms, each part of that chain is a set of neural connections that doesn’t feel reducible.
Even on reflection, our experiences reduce to qualia. We deduce that qualia are made of atoms, but that doesn’t mean that our experience feels like it’s been reduced to atoms.
Where is that heading? Is it supposed to tell me why qualia are ineffable... or rather, why qualia are more ineffable than cognition?
I’m saying that we should expect experience to feel as if made of fundamental, ineffable parts, even though we know that it is not. So, qualia aren’t the problem for a Turing machine that they appear to be.
Also, we all share these experience ‘parts’ with most other humans, due to the psychological unity of humankind. So, if we’re all sat down at an early age, and drilled with certain patterns of mind parts (times-tables), then we should expect to be able to draw upon them with ease.
My original point, however, was just that the map isn’t the territory. Qualia don’t get special attention just because they feel different. They have a perfectly natural explanation, and you don’t get to make game-changing claims about the territory until you’ve made sure your map is pretty spot-on.
I don’t see why. Saying that experience is really complex neural activity isn’t enough to explain that, because thought is really complex neural activity as well, and we can communicate and unpack concepts.
Can you write the code for SeeRed()? Or are you saying that TMs would have ineffable concepts?
You’ve inverted the problem: you have created the expectation that nothing mental is effable.
No, I’m saying that no basic, mental part will feel effable. Using our cognition, we can make complex notions of atoms and guitars, built up in our minds, and these will explain why our mental aspects feel fundamental, but they will still feel fundamental.
I’m not continuing this discussion, it’s going nowhere new. I will offer Orthonormal’s sequence on qualia as explanatory however: http://lesswrong.com/lw/5n9/seeing_red_dissolving_marys_room_and_qualia/
You seem to be hinting, but are not quite saying, that qualia are basic and therefore ineffable, whilst thoughts are non-basic and therefore effable.
Confirming the above would be somewhere new.
I’m saying that there are (something like) certain constructs in the brain that are used whenever the most simple conscious thought or feeling is expressed. They’re even used when we don’t choose to express something, like when we look at something. We immediately see its components (surfaces, legs, handles), and the ones we can’t break down (lines, colours) feel like the most basic parts of those representations in our minds.
Perhaps the construct that we identify as red is a set of neurons XYZ firing. If so, whenever we notice (that is, other sets of neurons observe) that XYZ go off, we just take it to be ‘red’. It really appears to be red, and none of the other workings of the neurons can break it down any further. It feels ineffable, because we are not privy to everything that’s going on. We can simply use a very restricted portion of the brain to examine other chunks, and give them different labels.
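To make that picture concrete, here is a deliberately crude toy sketch (the pattern names and the introspect function are invented for illustration only, not a claim about how real brains are organised): a reporting routine that can say which internal pattern is active without being able to say anything about what that pattern is made of.

    # Toy illustration: a monitor that can only label opaque internal patterns.
    # It never sees the fine-grained activity that realises a pattern, so the
    # label is the most it can report.
    LABELS = {"pattern_XYZ": "red", "pattern_UVW": "middle C"}

    def introspect(active_pattern):
        # The monitor receives only an opaque identifier, never the
        # neuron-level (or atom-level) detail underneath it.
        return LABELS.get(active_pattern, "ineffable")

    print(introspect("pattern_XYZ"))  # reports "red", but cannot decompose it further

On this picture, the ‘ineffability’ is just the monitor’s lack of access to what is going on underneath, not a fact about fundamental physics.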
However, we can use other neuronal patterns, to refer to and talk about atoms. Large groups of complex neural firings can observe and reflect upon experimental results that show that the brain is made of atoms.
Now, even though we can build up a model of atoms, and prove that the basic features of conscious experience (redness, lines, the hearing of a middle C) are made of atoms, the fact is, we’re still using complex neuronal patterns to think about these. The atom may be fundamental, but it takes a lot of complexity for me to think about the atom. Consciousness really is reducible to atoms, but when I inspect consciousness, it still feels like a big complex set of neurons that my conscious brain can’t understand. It still feels fundamental.
Experientially, redness doesn’t feel like atoms because our conscious minds cannot reduce it in experience, but they can prove that it is reducible. People make the jump that, because complex patterns in one part of the brain (one conscious part) cannot reduce another (conscious) part to mere atoms, it must be a fundamental part of reality. However, this does not follow logically—you can’t assume your conscious experience can comprehend everything you think and feel at the most fundamental level, purely by reflection.
I feel I’ve gone on too long, in trying to give an example of how something could feel basic but not be. I’m just saying we’re not privy to everything that’s going on, so we can’t make massive knowledge claims about it, i.e. that a Turing machine couldn’t experience what we’re experiencing, purely by appeal to reflection. We just aren’t reflectively transparent.
I can’t really speak for LW as a whole, but I’d guess that among the people here who don’t believe¹ “qualia doesn’t exist”, 1 and 2 are fine, but we have issues with 3, as expanded below. Relatedly, there seems to be some confusion between the “boring AI” proposition, that you can make computers do reasoning, and Searle’s “strong AI” thing he’s trying to refute, which says that AIs running on computers would have both consciousness and some magical “intentionality”. “Strong AI” shouldn’t actually concern us, except in talking about EMs or trying to make our FAI non-conscious.
3. if you simulate a brain with a Turing machine, it won’t have qualia
Pretty much disagree.
qualia is clearly a basic fact of physics
Really disagree.
and there’s no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not
And this seems really unlikely.
¹ I qualify my statement like this because there is a long-standing confusion over the use of the word “qualia” as described in my parenthetical here.
Well, let’s be clear: the argument I laid out is trying to refute the claim that “I can create a human-level consciousness with a Turing machine”. It doesn’t mean you couldn’t create an AI using something other than a pure Turing machine and it doesn’t mean Turing machines can’t do other smart computations. But it does mean that uploading a brain into a Von Neumann machine isn’t going to keep you alive.
So if you disagree that qualia is a basic fact of physics, what do you think it reduces to? Is there anything else that has a first-person ontology the way qualia does?
And if you think physics can tell whether something is a Turing-machine-simulating-a-brain, what’s the physical algorithm for looking at a series of physical particles and deciding whether it’s executing a particular computation or not?
Something brains do, obviously. One way or another.
I should perhaps be asking what evidence Searle has for thinking he knows things like what qualia is, or what a computation is. My statements were both negative: it is not clear that qualia is a basic fact of physics; it is not obvious that you can’t describe computation in physical terms. Searle just makes these assumptions.
If you must have an answer, how about this: a physical system P is a computation of a value V if adding as premises the initial and final states of P and a transition function describing the physics of P shortens a formal proof that V = whatever.
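Spelled out slightly more formally (this is just one way of reading the sentence above; the notation is mine, not part of the original proposal):

$\mathrm{Computes}(P, V\!=\!v) \iff \ell\big(T \cup \{\mathrm{init}(P),\, \mathrm{final}(P),\, \delta_P\} \vdash V\!=\!v\big) < \ell\big(T \vdash V\!=\!v\big)$

where $T$ is the background formal theory, $\mathrm{init}(P)$ and $\mathrm{final}(P)$ are descriptions of the initial and final physical states of $P$, $\delta_P$ is a transition function describing the physics of $P$, and $\ell(\Gamma \vdash \varphi)$ is the length of the shortest formal proof of $\varphi$ from the premises $\Gamma$.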
They’re not assumptions, they’re the answers to questions that have the highest probability going for them given the evidence.
There’s your problem. Why the hell should we assume that “qualia is clearly a basic fact of physics ”?
Because it’s the only thing in the universe we’ve found with a first-person ontology. How else do you explain it?
Well, I probably can’t explain it as eloquently as others here—you should try the search bar, there are probably posts on the topic much better than this one—but my position would be as follows:
Qualia are experienced directly by your mind.
Everything about your mind seems to reduce to your brain.
Therefore, qualia are probably part of your brain.
Furthermore, I would point out two things: one, that qualia seem to be essential parts of having a mind; I certainly can’t imagine a mind without qualia; and two, that we can view (very roughly) images of what people see in the thalamus, which would suggest that what we call “qualia” might simply be part of, y’know, data processing.
Another not-speaking-for-LW answer:
Re #1: I certainly agree that we experience things, and that therefore the causes of our experience exist. I don’t really care what name we attach to those causes… what matters is the thing and how it relates to other things, not the label. That said, in general I think the label “qualia” causes more trouble due to conceptual baggage than it resolves, much like the label “soul”.
Re #2: This argument is oversimplistic, but I find the conclusion likely.
More precisely: there are things outside my brain (like, say, my adrenal glands or my testicles) that alter certain aspects of my experience when removed, so it’s possible that the causes of those aspects reside outside my brain. That said, I don’t find it likely; I’m inclined to agree that the causes of my experience reside in my brain. I still don’t care much what label we attach to those causes, and I still think the label “qualia” causes more confusion due to conceptual baggage than it resolves.
Re #3: I see no reason at all to believe this. The causes of experience are no more “clearly a basic fact of physics” than the causes of gravity; all that makes them seem “clearly basic” to some people is the fact that we don’t understand them in adequate detail yet.
The whole thing: it’s the Chinese Room all over again, an intuition pump that begs the very question it’s purportedly answering. (Beginning an argument for the existence of qualia with a bare assertion that they exist is a little more obvious than the way that the word “understanding” is fudged in the Chinese Room argument, but basically it’s the same.)
I suppose you could say that there’s a grudging partial agreement with your point number two: that “the brain causes qualia”. The rest of what you listed, however, is drivel, as is easy to see if you substitute some other term besides “qualia”, e.g.:
Free will exists (because: we experience it)
The brain causes free will (because if you cut off any part, etc.)
If you simulate a brain with a Turing machine, it won’t have free will because clearly it’s a basic fact of physics and there’s no way to tell just using physics whether something is a machine simulating a brain or not.
It doesn’t matter what term you plug into this in place of “qualia” or “free will”, it could be “love” or “charity” or “interest in death metal”, and it’s still not saying anything more profound than, “I don’t think machines are as good as real people, so there!”
Or more precisely: “When I think of people with X it makes me feel something special that I don’t feel when I think of machines with X, therefore there must be some special quality that separates people from machines, making machine X ‘just a simulation’.” This is the root of all these Searle-ian arguments, and they are trivially dissolved by understanding that the special feeling people get when they think of X is also a property of how brains work.
Specifically, the thing that drives these arguments is our inbuilt machinery that classifies things as mind-having or not-mind-having, for purposes of prediction-making. But the feeling that we get that a thing is mind-having or not-mind-having is based on what was useful evolutionarily, not on what the actual truth is. Searlian (Surly?) arguments are thus in exactly the same camp as any other faith-based argument: elevating one’s feelings to Truth, irrespective of the evidence against them.
Just a nit pick: the argument Aaron presented wasn’t an argument for the existence of qualia, and so taking the existence of qualia as a premise doesn’t beg the question. Aaron’s argument was an argument against artificial consciousness.
Also, I think Aaron’s presentation of (3) was a bit unclear, but it’s not so bad a premise as you think. (3) says that since qualia are not reducible to purely physical descriptions, and since a brain-simulating Turing machine is entirely reducible to purely physical descriptions, brain-simulating Turing machines won’t experience qualia. So if we have qualia, and count as conscious in virtue of having qualia (1), then brain-simulating Turing machines won’t count as conscious. If we don’t have qualia, i.e. if all our mental states are reducible to purely physical descriptions, then the argument is unsound because premise (1) is false.
You’re right that you can plug many a term in to replace ‘qualia’, so long as those things are not reducible to purely physical descriptions. So you couldn’t plug in, say, heart-attacks.
This is the root of all these Searle-ian arguments, and they are trivially dissolved by understanding that the special feeling people get when they think of X is also a property of how brains work.
Could you explain this a bit more? I don’t see how it’s relevant to the argument. Searle is not arguing on the basis of any special feelings. This seems like a straw man to me, at the moment, but I may not be appreciating the flaws in Searle’s argument.
In order for the argument to make any sense, you have to buy into several assumptions which basically are the argument. It’s “qualia are special because they’re special, QED”. I thought about calling it circular reasoning, except that it seems closer to begging the question. If you have a better way to put it, by all means share.
When I said that our mind detection circuitry was the root of the argument, I didn’t mean that Searle was overtly arguing on the basis of his feelings. What I’m saying is, the only evidence for Searle-type premises are the feelings created by our mind-detection circuitry. If you assume these feelings mean something, then Searle-ish arguments will seem correct, and Searle-ish premises will seem obvious beyond question.
However, if you truly grok the mind-projection fallacy, then Searle-type premises are just as obviously nonsensical, and there’s no reason to pay any attention to the arguments built on top of them. Even as basic a tool as Rationalist Taboo suffices to debunk the premises before the argument can get off the ground.
Any valid argument has a conclusion that is entailed by its premises taken jointly. Circularity is when the whole conclusion is entailed by one premise, with the others being window-dressing.
I think there is a way that ripe tomatoes seem visually: how is that mind-projection?
But … if you’re assuming that qualia are “not reducible to purely physical descriptions”, and you need qualia to be conscious, then obviously brain-simulations won’t be conscious. But those assumptions seem to be the bulk of the position he’s defending, aren’t they?
Right, the argument comes down, for most of us, to the first premise: do we or do we not have mental states irreducible to purely physical conditions. Aaron didn’t present an argument for that, he just presented Searle’s argument against AI from that. But you’re right to ask for a defense of that premise, since it’s the crucial one and it’s (at the moment) undefended here.
Presenting an obvious result of a nonobvious premise as if it was a nonobvious conclusion seems suspicious, as if he’s trying to trick listeners into accepting his conclusion even when their priors differ.
[Edited for terminology.]
Not only suspicious, but impossible: if the premises are non-trivial, the conclusion is non-trivial.
In every argument, the conclusion follows straight away from the premises. If you accept the premises, and the argument is valid, then you must accept the conclusion. The conclusion does not need any further support.
Y’know, you’re right. Trivial is not the right word at all.
To pick a further nit, the argument is more that qualia can’t be engineered into an AI. If an AI implementation has qualia at all, it would be serendipitous.
That’s a possibility, but not as I laid out the argument: if being conscious entails having qualia, and if qualia are all irreducible to purely physical descriptions, and every state of a Turing machine is reducible to a purely physical description, then Turing machines can’t simulate consciousness. That’s not very neat, but I do believe it’s valid. Your alternative is plausible, but it requires my ‘Turing machines are reducible to purely physical descriptions’ premise to be false.
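For what it’s worth, the skeleton of the argument as laid out above can be written schematically like this (the predicate letters are my own shorthand: C = is conscious, Q = has qualia, R = is reducible to a purely physical description, M = is a brain-simulating Turing machine):

$(1)\ \forall x\,(Cx \to Qx)$
$(2)\ \forall x\,(Qx \to \neg Rx)$
$(3)\ \forall x\,(Mx \to Rx)$
$\therefore\ \forall x\,(Mx \to \neg Cx)$

The conclusion follows by chaining (3) with the contrapositives of (2) and (1), so the form is valid; everything turns on whether premises (1) and (2) are true.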
Huh? This isn’t an argument for the existence of qualia—it’s an attempt to figure out whether you believe in qualia or not. So I take it you disagree with step one, that qualia exists? Do you think you are a philosophical zombie?
I do think essentially the same argument goes through for free will, so I don’t find your reductio at all convincing. There’s no reason, however, to believe that “love” or “charity” is a basic fact of physics, since it’s fairly obvious how to reduce these. Do you think you can reduce qualia?
I don’t understand why you think this is a claim about my feelings.
Suppose that neuroscientists some day show that the quale of seeing red matches a certain brain structure or a neuron firing pattern or a neuro-chemical process in all humans. Would you then say that the quale of red has been reduced?
Of course not!
and why not?
Because the neuron firing pattern is presumably the cause of the quale, it’s certainly not the quale itself.
I don’t understand what else is there.
Imagine a flashlight with a red piece of cellophane over it pointed at a wall. Scientists some day discover that the red dot on the wall is caused by the flashlight—it appears each and every time the flashlight fires and only when the flashlight is firing. However, the red dot on the wall is certainly not the same as the flashlight: one is a flashlight and one is a red dot.
The red dot, on the other hand, could be reduced to some sort of interaction between certain frequencies of light-waves and wall-atoms and so on. But it will certainly not get reduced to flashlights.
By the same token, you are not going to reduce the-subjective-experience-of-seeing-red to neurons; subjective experiences aren’t made out of neurons any more than red dots are made of flashlights.
Ok, that’s where we disagree. To me the subjective experience is the process in my brain and nothing else.
There’s no argument there. Your point about qualia is illustrated by your point about flashlights, but not entailed by it.
How do you know this?
There’s no certainty either way.
Reduction is an explanatory process: a mere observed correlation does not qualify.
I think that anyone talking seriously about “qualia” is confused, in the same way that anyone talking seriously about “free will” is.
That is, they’re words people use to describe experiences as if they were objects or capabilities. Free will isn’t something you have, it’s something you feel. Same for “qualia”.
Dissolving free will is considered an entry-level philosophical exercise for Lesswrong. If you haven’t covered that much of the sequences homework, it’s unlikely that you’ll find this discussion especially enlightening.
(More to the point, you’re doing the rough equivalent of bugging people on a newsgroup about a question that is answered in the FAQ or an RTFM.)
This is probably a good answer to that question.
Because (as with free will) the only evidence anyone has (or can have) for the concept of qualia is their own intuitive feeling that they have some.
Free will isn’t something you have, it’s something you feel.
So you say. It is not standardly defined that way.
Same for “qualia”.
Qualia are defined as feelings, sensations, etc. Since we have feelings, sensations, etc., we have qualia. I do not see the confusion in using the word “qualia”.
Well, would that mean writing a series like this?
My intuition certainly says that Martha has a feeling of ineffable learning. Do you at least agree that this proves the unreliability of our intuitions here?
Who said anything about our intuitions (except you, of course)?
You keep making statements like, “the neuron firing pattern is presumably the cause of the quale, it’s certainly not the quale itself.”
And you seem to consider this self-evident. Well, it seemed self-evident to me that Martha’s physical reaction would ‘be’ a quale. So where do we go from there?
(Suppose your neurons reacted all the time the way they do now when you see orange light, except that they couldn’t connect it to anything else—no similarities, no differences, no links of any kind. Would you see anything?)
I guess you need to do some more thinking to straighten out your views on qualia.
Goodnight, Aaron Swartz.
downvoted posthumously.
Let’s back up for a second:
You’ve heard of functionalism, right? You’ve browsed the SEP entry?
Have you also read the mini-sequence I linked? In the grandparent I said “physical reaction” instead of “functional”, which seems like a mistake on my part, but I assumed you had some vague idea of where I’m coming from.
Or you do. You claim the truth of your claims is self-evident, yet it is not evident to, say, hairyfigment, or Eliezer, or me for that matter.
If I may ask, have you always held this belief, or do you recall being persuaded of it at some point? If so, what convinced you?
Could you expand on this point, please? It is generally agreed* that “free will vs determinism” is a dilemma that we dissolved long ago. I can’t see what else you could mean by this, so …
[*EDIT: here, that is]
I guess it really depends on what you mean by free will. If by free will, pjeby meant some kind of qualitative experience, then it strikes me that what he means by it is just a form of qualia and so of course the qualia argument goes through. If he means by it something more complicated, then I don’t see how point one holds (we experience it), and the argument obviously doesn’t go through.
But that’s not contentious. Qualia are things like the appearance of tomatoes or the taste of lemon. I’ve seen tomatoes and tasted lemons.
But Searle says that feelings, understanding, etc. are properties of how the brain works. What he argues against is the claim that they are computational properties. But it is also uncontentious that physicalism can be true and computationalism false.
It isn’t even clear to Searle that qualia are physically basic. He thinks consciousness is a high-level outcome of the brain’s concrete causal powers. His objection to computational approaches is rooted in the abstract nature of computation, not in the physical basicness of qualia. (In fact, he doesn’t use the word “qualia”, although he often seems to be talking about the same thing.)