Any man’s death diminishes me, because their agenthood and qualia are probabilistically similar to mine, and it would not have taken many counterfactual changes for me to be not at all unlike that man, compared to a babyeater, whose pain bothers me less. Thus altruism is found in the egoist, to an extent.
Getting from “other people’s minds are probably similar to mine” to “I care about other people’s minds” still requires some implicit premises or some psychological features beyond egoism (e.g. empathy).
That’s a better response to Will’s post than to the Donne quote. Donne only states that all humans influence each other’s existence to some minimal degree, butterfly-effect-style (I would object, incidentally, that such influence may not always be desirable); it’s Will that brings up the similarities between humans as his moral foundation.
Please don’t delete comments. It makes it hard to understand orphaned replies. Adding an [Edit: Withdrawn] at the end of the comment serves the same purpose, but maintains conversational continuity.
Are you unfamiliar with the term, or are you asking him to demonstrate that he understands it well enough to permit his using it as a quiet assumption in making a point?
I am familiar with the term, but I don’t seem to have any. And no one’s been able to tell me what they do, so I like to ask when they come up so that maybe someday I’ll find out.
I don’t believe in qualia as a real entity, but when people talk about them they’re referring to a genuine phenomenon which you also experience: that your conscious understanding of the experience of perception is only the merest shadow of the perception itself. Seeing red doesn’t mean seeing something with a little XML “red” tag attached, but something much more complicated that happens beyond your conscious introspection. You can imagine the state of having switched that “red” experience with the “green” experience, in all your memories as well as in current perception, and still instantly knowing that the switch had occurred. This phenomenon is not an illusion, just a blind spot of conscious knowledge which happens to confuse the hell out of naive philosophers.
Thank you, well said! I’ve seen people go so far in dissolving qualia that they think they have to deny their own conscious experience, or think the confusion is extinguished as soon as you have the terminology nailed down.
[Y]our conscious understanding of the experience of perception is only the merest shadow of the perception itself.
Of course. If I had perfect knowledge of my brain’s functioning, now that would be a very strange thing indeed.
You can imagine the state of having switched that “red” experience with the “green” experience, in all your memories as well as in current perception, and still instantly knowing that the switch had occurred.
No, I can’t. If all my memories had been altered to agree with my newly-altered perception system, what difference would I detect? How would I detect it? Different from what?
The hypothetical situation I mean is one where your current retina is reprogrammed to switch red and green stimuli, and your memories are edited so that you don’t figure it out from inconsistencies, but everything else is left the same.
The fact that there’s subconscious cognitive content to red vs. green can be deduced from things like instinctive reactions† to the sight of blood: the brain doesn’t check the color against the memory of other blood, it reacts faster than that, directly to the perception. The emotional valence of colors would seem off somehow after a switch, because those don’t appear to operate fully through memory, either. Snap judgments of people’s attractiveness would backfire as your subconscious applied the rule “green tint means sickly” to someone with a healthy complexion.
I don’t think you’d be able to consciously articulate what exactly seemed “red” about that green grass, but parts of your mind would be telling you that something’s gone wrong, because they’re hooked up not just to labels “red” and “green” but to full systems of processing that would be running on suddenly different stimuli.
†Similarly, chimps raised by humans in captivity will still freak out when exposed to a fake snake, because certain patterns have been encoded deep within. There’s no reason for such patterns to be raised to the level of conscious knowledge.
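The moving parts of this hypothetical can be sketched in a few lines of toy Python (my own illustration, not anything from the thread; the names and rules are all made up): the conscious report is read from (edited) memory and stays consistent after the swap, while a hardwired reflex keyed to the raw percept starts misfiring.

```python
# Toy model of the red/green swap thought experiment.

RETINA_SWAP = {"red": "green", "green": "red"}

def perceive(stimulus_color, swapped):
    """Raw percept delivered to the rest of the brain."""
    return RETINA_SWAP[stimulus_color] if swapped else stimulus_color

def conscious_label(percept, memory):
    """Conscious naming: looks the percept up in (possibly edited) memory."""
    return memory[percept]

def blood_reflex(percept):
    """Hardwired rule: keyed to the raw percept, no memory lookup."""
    return percept == "red"

# Before the swap: memory maps percepts to the usual names.
memory = {"red": "red", "green": "green"}
assert conscious_label(perceive("red", swapped=False), memory) == "red"
assert blood_reflex(perceive("red", swapped=False))  # blood looks alarming

# After the swap, memories are edited to match the new percepts, so the
# conscious report is unchanged...
edited_memory = {"green": "red", "red": "green"}
assert conscious_label(perceive("red", swapped=True), edited_memory) == "red"

# ...but the reflex, wired to the raw percept, no longer fires at blood
# and fires at grass instead: "something's gone wrong."
assert not blood_reflex(perceive("red", swapped=True))
assert blood_reflex(perceive("green", swapped=True))
```

The point of the split is that only the memory-backed pathway was patched; anything wired directly to the percept still runs on the swapped signal.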
Ahhh, so you’d only be reprogramming part of my brain. Well, of course I’d run into problems then. All that means is that there are more parts of my brain than those I have conscious access to, which seems pretty obvious to me even before I start to think about what I know of neurology.
I wouldn’t be sure; the vision system has an amazing ability to adapt to rewiring.
Monkeys were able to see another color through gene therapy that their species hadn’t seen before.
Most philosophical definitions are pretty weird. On the rare occasions I use “qualia”, it means the inside view of your sense perceptions. What things look/sound/feel like to the person doing the sensing.
By experience I mean anything which we detect with one of our senses.
The subjective part is, IMHO, the key to qualia.
Suppose that you’ve never seen red light, and that you are then told all of its properties in perfect detail. You would still gain new information by actually seeing red light, because you still don’t know “what it feels like” to see it. The quale is not the objective facts, but rather what seeing the light “feels like”: your perception of the effect produced on your brain by the light.
(Qualia are usually taken to be an argument against materialism, because after you know every objective fact about something, you still gain new information (the qualia) by experiencing it.)
This “Mary’s Room” argument, like the “Chinese Room” argument†, contains a subtle sleight of hand.
On the one hand, for the learning to be about just the qualia rather than about externally observable features of vision processing, the subject would need to learn immensely more than the physical properties of red light. (The standard version of Mary’s Room does so, postulating Mary to also deeply understand her own visual cortex and the changes it would undergo upon being exposed to that color.) In fact, the depth of conscious theoretical understanding that this would require is far beyond any human being, and it’s wrong and silly to naively map our mind-states onto those of such a mind.
On the other hand, it plays on the everyday intuition that if I’ve never seen the color red, but have been given a short list of facts about it and am consciously representing my limited intuition for that set of facts, that doesn’t add up to the experience of seeing red.
The equivocation consists of thinking that a superhuman level of detailed understanding of (and capability to predict) the human brain can be analogized to that everyday intuition, rather than being unimaginably other to it. So I don’t see that an agent who was really possessed of that level of self-understanding would necessarily feel that the actual experience added an ineffable otherness to what they already knew.
That sense of ineffable otherness, IMO, comes from the levels of detail in the mental processing of color which we don’t have conscious access to. Our conscious mind isn’t built to understand what we’re doing when we visually perceive, at the level that we actually do it—there’s no evolutionary need to communicate all the richness of color perception, so the conscious mind didn’t evolve to encompass it all. And this limitation of our conscious understanding feels to us like a thing we have which cannot in principle be reduced.
† The application of this same principle to the Chinese Room argument is a trivial exercise, left to the reader.
Intuitions don’t matter. If Mary can’t activate the neural pathways that participate in creating the experience of seeing red, then she has no means of knowing how she will experience redness. All the models she can create in her mind will be external to her, just as the mind created by the actions of the human being in the Chinese Room is external to that human being.
And this limitation of our conscious understanding feels to us like a thing we have which cannot in principle be reduced.
It is not only conscious understanding that is required; we would need conscious control of individual neurons and synapses to be able to experience a quale given just a description of it. For example, to be able to name a color and imagine a color given its name, Mary would (roughly speaking) have to manually connect neurons in her visual cortex to neurons in her Broca’s area and in her auditory cortex.
So I think that, contrary to Dennett, Mary will gain new information when she sees colors, since the construction of the human brain doesn’t allow that information to be acquired by other means. Thus, in a sense, human qualia cannot be reduced.
That contradicts one of the assumptions in the thought experiment. You’re establishing qualia as a physical property; in that case, “what it feels like to see red” is amongst the things Mary knows about, by hypothesis.
Also, if it just comes down to activating those neurons, then Mary knows that too and can perform an experiment to activate those neurons without having a ‘red thing’ in front of her, using her incredible superhuman intelligence and resources.
I am not establishing qualia as physical properties of the brain’s activity; I think of them as descriptions of specific neural activity in the terms of the human self-model. And the limitations of that self-model (it’s not detailed enough to refer to individual neurons) don’t allow an unambiguous correspondence to be established, within that self-model, between the physical description of the brain and the self-model’s description of the brain.
Mary knows that too and can perform an experiment to activate those neurons without having a ‘red thing’ in front of her, using her incredible superhuman intelligence and resources.
And what is the difference between seeing a red thing and activating those neurons? The point of “Mary’s Room” is to know what seeing red means without actually seeing it.
And what is the difference between seeing a red thing and activating those neurons? The point of “Mary’s Room” is to know what seeing red means without actually seeing it.
Depends who’s using it. For Dennett, for instance, the point of Mary’s room is to point out how ridiculous this notion of qualia is, or at least how silly the thought experiment is.
As stated, she knows everything physical about red. So she knows, for instance, how to build a machine that will activate her red-seeing neurons in the absence of the color. Also as stated, she can perform whatever experiments she needs to in order to become an expert color scientist. So she can have whatever experience would come from having those neurons activated.
If you think there’s nothing else to the experience, then I think we’re in agreement so far.
In fact, the depth of understanding that this would require is far beyond any human being, and we really have no intuition for what it would be like to have it.
So we have no intuition for that understanding level’s qualia? ;-)
Well, of course a verbal description of red light is different from seeing red light. One is an auditory stimulus, and one is a visual stimulus. They do different things to my neurons. Are qualia about something other than neurons?
I voted up WrongBot’s redefinition of his question, ‘are qualia about something other than neurons?’. Are qualia anything other than a word that has been awkwardly defined? Why is colour always the example used to illustrate qualia? Is there something different about colour compared to things like position, texture, and pitch? Has anyone thought of a better or even different way to experience our experiences?
Readers may be interested in my approach to the problem, or rather, the problem that remains even after any terminology issues are settled.
Summary: What we identify as “qualia” is the encoding of memories that we cannot yet compare directly between people, to the extent we can’t compare them. This incommensurability can easily arise among agents who are similar, but who self-modify in a way that does not place any priority on the ability to directly transfer memories to other agents.
In that case, their methods of storing memories are ad-hoc, and look like garbage to each other—but with the right assumptions and interaction, they can achieve a limited ability to compare, and thereby have terminology like “red” that means something to all agents, even as it doesn’t call up exactly the same idea for each one.
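A toy sketch of that summary (my own, not from the linked post; the code values are arbitrary stand-ins for each agent's ad-hoc storage format): two agents whose private encodings are mutually unreadable can still ground a shared word by pointing at public stimuli.

```python
# Two agents store the same stimuli under private, ad-hoc encodings.
# Comparing raw codes directly is meaningless, but joint labeling of
# shared external stimuli yields a limited common vocabulary.

STIMULI = ["red", "green", "blue"]

# Each agent's internal code for a stimulus is arbitrary and private:
alice = {"red": 17, "green": 4, "blue": 250}
bob = {"red": 903, "green": 12, "blue": 77}

# Raw memory codes look like garbage to the other agent:
assert alice["red"] != bob["red"]

# Interaction over shared stimuli recovers a limited correspondence:
# the word "red" is grounded by pairing whatever each agent's internals
# do when both are shown the same public thing.
lexicon = {stim: {"alice": alice[stim], "bob": bob[stim]} for stim in STIMULI}

# "red" now means something to both agents, even though it doesn't call
# up the same internal state in each:
assert lexicon["red"] == {"alice": 17, "bob": 903}
```

The lexicon compares the agents only at the granularity of shared stimuli; the internal codes themselves remain incommensurable, which is the claimed source of ineffability.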
I was intrigued when I first read this when you last posted it, and I thought about it for a while. The problem with it, it seems to me, is that this is a good explanation for why qualia are ineffable, but it doesn’t seem to come any closer to explaining what they are or how they arise.
So, I could imagine a world (it may even be this one!) where people’s brains happen to be organized similarly enough that two people really could transfer qualia between them, but this still doesn’t explain anything about them.
The problem with it, it seems to me, is that this is a good explanation for why qualia are ineffable, but it doesn’t seem to come any closer to explaining what they are or how they arise.
You’re right. But I believe that the ineffable aspect is closely related to the other two questions, although I don’t have an answer in the same detail as for the ineffability question (which would still be progress!).
To give a sketch of what I have in mind, my best explanation is this: conscious minds form when a subsystem is able to screen itself off from the entropizing forces of the environment (similar in kind to a refrigerator or other control system). This necessarily decouples it from the patterns that exist in the environment, as well as other minds that have done the same.
So the formation of a conscious mind will coincide with the formation of incompatible encoding methods, unless special care is taken to ensure that the encoding protocols are the same. Therefore, we shouldn’t be surprised to notice that, “hey, everything that’s conscious also has ineffable experiences with the other conscious things.”
But again, I don’t claim this part is as well-developed or thought-out.
What is it that you feel/see/touch/taste/think/etc. instead of simply acting? Why is there a “you” you experience, instead of mere rote action? We use this label for the sorts of things that distinguish empty existence from our own subjective (personally observed/felt) experience. The thing about humans that distinguishes them from P-zombies.
What is it that you feel/see/touch/taste/think/etc. instead of simply acting?
Why do you group together sense perceptions (which I have) with thoughts (which I have), and call them qualia (which I don’t have)?
Why is there a “you” you experience, instead of mere rote action?
How are these different?
We use this label for the sorts of things that distinguish empty existence from our own subjective (personally observed/felt) experience.
How can existence be “empty”? Is subjective experience just sense perception? Because sense perception doesn’t seem like it warrants all this mysteriousness.
The thing about humans that distinguishes them from P-zombies.
That’s odd. I thought the sequence on P-zombies made it pretty clear that they don’t exist. Why do we need to be distinguished from confused, impossible thought experiments?
Perhaps you simply do not have qualia or subjective experience. Some people do not have visual mental imagery, strange though that may seem to those of us who do. Similarly, maybe some people do not have anything they are moved to describe as subjective experience. Such people, if they exist, are the opposite of the logically absurd p-zombies. P-zombies falsely claim that they do have these things; people without them truthfully claim that they do not.
You might just be Socratically role-playing, but even so, there may be other people who actually do not have these things. That is, they would express puzzlement at talk about “the redness of red”, “awareness of one’s own self”, and so forth (and without having been tutored into such puzzlement by philosophers arguing that they cannot be experiencing what in fact they do experience).
Is there anyone here who does experience that puzzlement, even before knowing anything of the philosophical controversy around the subject?
To the best of my knowledge — which isn’t saying much: I’m not well-read in philosophy — I am in a minority of one on the subject of free will.
The discussion is always: do we have it, or don’t we?
My considered view is that some of us have it while the rest don’t. Like perfect pitch.
I’m pretty sure I don’t have free will; but I’ve encountered people who I’m pretty sure do have it.
I see that as a cheap way out. I think “do I have free will?” is just a confused question whose answer depends on the way you unconfuse it. I’m just in the minority of humans who refuse to answer that confused question—I’d like to say I refuse to answer all confused questions, but that’s probably not true.
Still, it is possible that confusion and disagreement about “qualia” and “free will” are just due to differences in personal experience, not to different interpretation of those labels.
People who lack visual mental imagery have atypical performance on certain kinds of cognitive tests, as Yvain’s article describes, and if I believed that such people existed, I would expect that testable difference. What type of test should I expect to distinguish between those who have qualia and those who do not?
But you’d have to use naive subjects who haven’t philosophised themselves into ignoring their own experience.
A little more indirectly, people without qualia would profess puzzlement at the very idea, and argue that there is no such thing. If they are philosophers, they will write articles on the incoherence of the concept. If they are psychologists, they will practice psychology on the basis that mental phenomena do not exist. If they are teachers, they will see the brain as a pot to be filled, not the mind as a fire to be ignited. Those who do have qualia will be as tenacious on the other side.
Nothing that those who do have qualia say about qualia will make sense to those who don’t, and those who don’t will have no difficulty in demonstrating that it is nonsense. Those who do have qualia will be unable to explain them even to each other, since they know no more about what they are than they know about how thought happens. All of their supposed explanations will only be disguised descriptions of what it feels like to have them.
The most direct test would be this: “Do you have qualia?”
How is that direct? First you’d have to explain what you mean by that, and “understanding” such an explanation would pretty much require convincing oneself that there are such things to be had in the first place.
There are some things you can test by asking; I can imagine asking someone, “do you ever get a twisty kind of feeling in your stomach or nearby, when you’ve just had something very bad happen to you and it slipped your mind for a while but then it intrudes again on your awareness—and the twisty feeling comes precisely at that moment”.
That’s a feeling. It’s describable. I have it sometimes. It’s an empirical matter whether other people recognize an experience of theirs in that description, or not. It’s much like pointing to a red thing and asking people “is this red”, and then they confirm that it’s red to them.
The most direct test would be this: “Do you have qualia?”
How is that direct? First you’d have to explain what you mean by that, and “understanding” such an explanation would pretty much require convincing oneself that there are such things to be had in the first place.
It does. If psychologists came out with a study showing that 1 out of 10 people don’t experience qualia, I would feel rather certain that I was one of the 1 in 10 that don’t. Just like WrongBot, I think. However, my actual expectation is that we are all the same at that level of brain organization, and wonder what aspect of my experience people are labeling ‘qualia’.
Seeing red doesn’t mean seeing something with a little XML “red” tag attached,
Actually, this is exactly what I hypothesized qualia were: little reference tags of meaning that we attach to things we recognize.
When using an entirely new medium, I feel like I experience the creation of new qualia. For example, here on Less Wrong, each comment has a username. After some experience on Less Wrong, it feels like the username is different from (and more than) a set of green underlined letters in bold font at the upper right hand corner that tells you the person who wrote the comment—it’s like a separate object that means the source of the comment, and as soon as it has that extra meaning, it gains this elusive quale-like aspect.
However, my actual expectation is that we are all the same at that level of brain organization
I have an opposite hunch: that the further removed any part of our internal constitution is from the world outside our skins, the more we vary.
My reason is that there are many ways of doing the right thing to survive and reproduce. The genome isn’t big enough to contain a blueprint for a whole brain, so evolution has come up with a general mechanism (which no-one actually knows anything about yet) for the whole thing to organise itself when the newborn is dropped into an unknown environment. The organisation an individual brain ends up with is constrained by nothing more than the requirement to make the organism function in that environment.
Look around you at the variation in people’s personalities. They’re even more different inside their heads than that.
Getting from “other people’s minds are probably similar to mine” to “I care about other people’s minds” still requires some implicit premises or some psychological features beyond egoism (e.g. empathy).
Specifically, it requires “I care about myself”, with saner boundaries around what counts as ‘self’, as the implicit premise.
Added: Any specific reason for the downvote?
Evolution (of genes and/or memes) is probably sufficient to generate this.
That’s a better response to Will’s post than to the Donne quote. Donne only states that all humans influence each other’s existence to some minimal degree, butterfly-effect-style (I would object, incidentally, that such influence may not always be desirable); it’s Will that brings up the similarities between humans as his moral foundation.
Indeed. But my comment was a reply to Will’s.
Mine eyes, they deceive me! Deleted.
What are qualia?
WrongBot has been around enough that one can safely assume that his ignorance is Socratic.
Ahhh, so you’d only be reprogramming part of my brain. Well, of course I’d run into problems then. All that means is that there are more parts of my brain than those I have conscious access to, which seems pretty obvious to me even before I start to think about what I know of neurology.
I think we agree with each other.
I wouldn’t be sure; the vision system has an amazing ability to adapt to rewiring. Monkeys were able to see another color through gene therapy that their species hadn’t seen before.
Indeed there’s rewiring over time, but it wouldn’t be instant and it wouldn’t be total, so the point stands.
That’s a really interesting experiment—can you find me a link?
http://www.wired.com/wiredscience/2009/09/colortherapy/
Thanks!
The subjective way we experience things.
What do you mean by “experience”? And “subjective”? I’m not sure what you’re talking about.
By experience I mean anything which we detect with one of our senses.
The subjective part is, IMHO, the key to qualia.
So I think that, contrary to Dennett, Mary will gain new information when she sees colors, since the construction of the human brain doesn’t allow that information to be acquired by other means. Thus, in a sense, human qualia cannot be reduced.
You may be interested in this paper which makes a similar argument.
Thanks. It is an identical argument, modulo my inability to make all the reasoning and premises sufficiently transparent.
So we have no intuition for that understanding level’s qualia? ;-)
Yeah, I realized the unintended recursion there, and have edited accordingly...
Well, of course a verbal description of red light is different from seeing red light. One is an auditory stimulus, and one is a visual stimulus. They do different things to my neurons. Are qualia about something other than neurons?
I voted up WrongBot’s redefinition of his question, ‘are qualia about something other than neurons?’. Are qualia anything other than a word that has been awkwardly defined? Why is colour always the example used to illustrate qualia? Is there something different between colour and things like position, texture, and pitch? Has anyone thought of a better, or even just different, way to experience our experiences?
Readers may be interested in my approach to the problem, or rather, the problem that remains even after any terminology issues are settled.
Summary: What we identify as “qualia” is the encoding of memories that we cannot yet compare directly between people, to the extent we can’t compare them. This incommensurability can easily arise among agents who are similar, but who self-modify in a way that does not place any priority on the ability to directly transfer memories to other agents.
In that case, their methods of storing memories are ad-hoc, and look like garbage to each other—but with the right assumptions and interaction, they can achieve a limited ability to compare, and thereby have terminology like “red” that means something to all agents, even as it doesn’t call up exactly the same idea for each one.
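The idea in this summary can be sketched as a toy simulation (the class, stimulus names, and encoding scheme below are my own illustrative assumptions, not anything from the original comment): two agents each store a private, ad-hoc code for the same stimuli; neither can read the other’s codes directly, but by jointly attending to shared stimuli and exchanging labels, they converge on a common vocabulary like “red”.

```python
STIMULI = ("650nm", "550nm", "470nm")  # wavelengths standing in for red, green, blue

class Agent:
    """An agent whose internal encoding of stimuli is private and ad hoc."""

    def __init__(self, seed):
        # A private, arbitrary internal code per stimulus; the scheme is
        # deliberately opaque and differs between agents (different seeds).
        self._encoding = {s: (seed * 2654435761 + i) & 0xFFFFFFFF
                          for i, s in enumerate(STIMULI)}
        self.vocabulary = {}  # private code -> public label, learned socially

    def internal_code(self, stimulus):
        return self._encoding[stimulus]

    def learn_label(self, stimulus, label):
        # Jointly attending to a shared stimulus lets the agent bind a
        # public label to its own private code for that stimulus.
        self.vocabulary[self._encoding[stimulus]] = label

    def name(self, stimulus):
        return self.vocabulary.get(self._encoding[stimulus])

a, b = Agent(seed=1), Agent(seed=2)

# Their private codes for the same stimulus differ: the encodings are
# incommensurable, and neither agent can interpret the other's directly.
assert a.internal_code("650nm") != b.internal_code("650nm")

# But joint exposure to shared stimuli builds a common terminology.
for stimulus, label in zip(STIMULI, ("red", "green", "blue")):
    a.learn_label(stimulus, label)
    b.learn_label(stimulus, label)

# "red" now means something to both agents, even though it does not call
# up the same internal code in each one.
assert a.name("650nm") == b.name("650nm") == "red"
```

The point of the sketch is only the structural one: agreement at the level of public labels is achievable even when the underlying representations are mutually unreadable.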
I was intrigued when I first read this when you last posted it, and I thought about it for a while. The problem with it, it seems to me, is that this is a good explanation for why qualia are ineffable, but it doesn’t seem to come any closer to explaining what they are or how they arise.
So, I could imagine a world (it may even be this one!) where people’s brains happen to be organized similarly enough that two people really could transfer qualia between them, but this still doesn’t explain anything about them.
You’re right. But I believe that the ineffable aspect is closely related to the other two questions, although I don’t have an answer in the same detail as for the ineffability question (which would still be progress!).
To give a sketch of what I have in mind, my best explanation is this: conscious minds form when a subsystem is able to screen itself off from the entropizing forces of the environment (similar in kind to a refrigerator or other control system). This necessarily decouples it from the patterns that exist in the environment, as well as other minds that have done the same.
So the formation of a conscious mind will coincide with the formation of incompatible encoding methods, unless special care is taken to ensure that the encoding protocols are the same. Therefore, we shouldn’t be surprised to notice that, “hey, everything that’s conscious also has ineffable experiences with the other conscious things.”
But again, I don’t claim this part is as well-developed or thought-out.
What is it that you feel/see/touch/taste/think/etc., instead of simply acting? Why is there a “you” that you experience, instead of mere rote action? “Qualia” is the label for these sorts of things that we use to distinguish between empty existence and our own subjective (personally observed/felt) experience; the thing about humans that distinguishes them from P-zombies.
Why do you group together sense perceptions (which I have) with thoughts (which I have), and call them qualia (which I don’t have)?
How are these different?
How can existence be “empty”? Is subjective experience just sense perception? Because sense perception doesn’t seem like it warrants all this mysteriousness.
That’s odd. I thought the sequence on P-zombies made it pretty clear that they don’t exist. Why do we need to be distinguished from confused, impossible thought experiments?
Perhaps you simply do not have qualia or subjective experience. Some people do not have visual mental imagery, strange though that may seem to those of us who do. Similarly, maybe some people do not have anything they are moved to describe as subjective experience. Such people, if they exist, are the opposite of the logically absurd p-zombies. P-zombies falsely claim that they do have these things; people without them truthfully claim that they do not.
You might just be Socratically role-playing, but even so, there may be other people who actually do not have these things. That is, they would express puzzlement at talk about “the redness of red”, “awareness of one’s own self”, and so forth (and without having been tutored into such puzzlement by philosophers arguing that they cannot be experiencing what in fact they do experience).
Is there anyone here who does experience that puzzlement, even before knowing anything of the philosophical controversy around the subject?
There is this example:
I see that as a cheap way out. I think “do I have free will?” is just a confused question whose answer depends on the way you unconfuse it. I’m just in the minority of humans who refuse to answer that confused question—I’d like to say I refuse to answer all confused questions, but that’s probably not true.
Still, it is possible that confusion and disagreement about “qualia” and “free will” are just due to differences in personal experience, not to different interpretation of those labels.
People who lack visual mental imagery have atypical performance on certain kinds of cognitive tests, as Yvain’s article describes, and if I believed that such people existed, I would expect that testable difference. What type of test should I expect to distinguish between those who have qualia and those who do not?
The most direct test would be this:
“Do you have qualia?”
- Yes
- No
But you’d have to use naive subjects who haven’t philosophised themselves into ignoring their own experience.
A little more indirectly, people without qualia would profess puzzlement at the very idea, and argue that there is no such thing. If they are philosophers, they will write articles on the incoherence of the concept. If they are psychologists, they will practice psychology on the basis that mental phenomena do not exist. If they are teachers, they will see the brain as a pot to be filled, not the mind as a fire to be ignited. Those who do have qualia will be as tenacious on the other side.
Nothing that those who do have qualia say about qualia will make sense to those who don’t, and those who don’t will have no difficulty in demonstrating that it is nonsense. Those who do have qualia will be unable to explain them even to each other, since they know no more about what they are than they know about how thought happens. All of their supposed explanations will only be disguised descriptions of what it feels like to have them.
Looks pretty much like our world, doesn’t it?
How is that direct? First you’d have to explain what you mean by that, and “understanding” such an explanation would pretty much require convincing oneself that there are such things to be had in the first place.
There are some things you can test by asking; I can imagine asking someone, “Do you ever get a twisty kind of feeling in your stomach or nearby, when you’ve just had something very bad happen to you and it slipped your mind for a while, but then it intrudes again on your awareness, and the twisty feeling comes precisely at that moment?”
That’s a feeling. It’s describable. I have it sometimes. It’s an empirical matter whether other people recognize an experience of theirs in that description, or not. It’s much like pointing to a red thing and asking people “is this red”, and then they confirm that it’s red to them.
How are “qualia” different?
Failing to understand would amount to a “No”.
It does. If psychologists came out with a study saying that 1 out of 10 people don’t experience qualia, I would feel rather certain that I was among the 1 in 10 who don’t. Just like WrongBot, I think. However, my actual expectation is that we are all the same at that level of brain organization, and I wonder what aspect of my experience people are labeling ‘qualia’.
Above, Orthonormal wrote,
Actually, this is exactly what I hypothesized qualia were: little reference tags of meaning that we attach to things we recognize.
When using an entirely new medium, I feel like I experience the creation of new qualia. For example, here on Less Wrong, each comment has a username. After some experience on Less Wrong, it feels like the username is different from (and more than) a set of green underlined letters in bold font at the upper right hand corner that tells you the person who wrote the comment—it’s like a separate object that means the source of the comment, and as soon as it has that extra meaning, it gains this elusive quale-like aspect.
I have an opposite hunch: that the further removed any part of our internal constitution is from the world outside our skins, the more we vary.
My reason is that there are many ways of doing the right thing to survive and reproduce. The genome isn’t big enough to contain a blueprint for a whole brain, so evolution has come up with a general mechanism (which no-one actually knows anything about yet) for the whole thing to organise itself when the newborn is dropped into an unknown environment. The organisation an individual brain ends up with is constrained by nothing more than the requirement to make the organism function in that environment.
Look around you at the variation in people’s personalities. They’re even more different inside their heads than that.