I don’t see what problems reductionism poses for qualia.
I’ve never gotten this either. It has always seemed to me that qualia exist, and that they can fully be explained by reductionism and physicalism (presumably as some sort of function of our nervous system interacting with stimuli). There are apparently some people who have a strong intuition that they can’t be explained in such a fashion, but I do not share this intuition.
(At his blog, Eric S. Raymond wrote an article arguing that qualia are probably the sensation one feels when one’s stimulus-processing systems light up, and that attempting to eliminate them is silly.)
“I don’t see what problems reductionism poses for qualia.”
“I’ve never gotten this either.”
I think I may write a sequence about this. I’ve noticed that there are a lot more LW posts trying to solve the Hard Problem (or insisting that it’s a pseudo-problem) than trying to explain what ‘Hard Problem’ means in the first place, or trying to state it precisely.
Thus I see a lot of people insisting that the Hard Problem either isn’t a problem, or isn’t hard, without investing any time into steel-manning (or even reading) the Other Side. Eliezer, actually, is one of the few LWers I’ve seen who generally grants that it’s both hard and a problem.
A sequence that spent more time trying to figure out what the problem is, and what methodology is appropriate for such a strange topic, might also be more domain-generally useful than one that leaps straight into picking the best solutions (or mocking the worst).
There are apparently some people who have a strong intuition that they can’t be explained in such a fashion, but I do not share this intuition.
Do you understand exactly why they have the intuition, and what their intuition amounts to?
It seems to me that attempting to eliminate qualia is a repeat of the comedy of behaviorism. “All these mystical people claim that qualia can’t be explained by physics, so I’ll say qualia don’t exist at all! That’ll show ’em!”
That may be true for eliminativists who are behaviorists, like perhaps Dennett. But it’s not true for eliminativists who acknowledge that introspective evidence is admissible evidence, and just deny that the evidence for qualia outweighs the evidence for the conjunction ‘physicalism is true, and phenomenal reductionism is false’.
If you can’t regenerate the reasons people disagree with you—if you’re still at the stage where the opposing side purely sounds like a silly caricature, with no coherent supporting arguments—then you should have low confidence that you know their positions’ strong and weak points.
Can you point me to such an explanation?
There’s actually one in that essay I linked to at the end of my post. Here is the most relevant paragraph (discussing the Mary’s Room problem):
Here is my physicalist account of Mary’s “Wow!” What she learns is what it feels like to have the color-processing pathways of her brain light up. This is an objective fact about her subjectivity; with a sufficiently good MRI we could actually see the difference in patterns of occipital-lobe activity. And that will probably be a world-changing experience for Mary, fully worthy of a “Wow!”, even if we concede the Mary’s-Room premise that she has not learned anything about the world outside her own skull.
Reading Wikipedia’s entry on qualia, it seems to me that most of the arguments that qualia can’t be explained by reductionism are powered by the same intuition that makes us think that you can give someone superpowers without changing them in any other way. Anyone with a basic knowledge of physiology knows the idea you can give someone the powers of Spider-Man or Aquaman without changing their physical appearance or internal anatomy is silly. Modern superhero writers have actually been forced to acknowledge this by occasionally referencing ways that such characters are physically different from humans (in ways that don’t cosmetically affect them, of course).
But because qualia are a property of our brain’s interaction with external stimuli, rather than a property of our bodies, the idea that you could change someone’s qualia without changing their brain or the external world fails to pass our nonsense detector. If I wake up and the spectrum is inverted, something is wrong with my brain, or something is wrong with the world.
That isn’t a reductive explanation, because no attempt is made to show how Mary’s red quale breaks down into smaller component parts. In fact, it doesn’t do much more than say that subjectivity exists, and occurs in sync with brain states. As such, it is compatible with dualism.
Reading Wikipedia’s entry on qualia, it seems to me that most of the arguments that qualia can’t be explained by reductionism are powered by the same intuition that makes us think that you can give someone superpowers without changing them in any other way.
You mean p-zombie arguments?
But because qualia are a property of our brain’s interaction with external stimuli, rather than a property of our bodies, the idea that you could change someone’s qualia without changing their brain or the external world fails to pass our nonsense detector.
Whatever. That doesn’t actually provide an explanation of qualia.
That isn’t a reductive explanation, because no attempt is made to show how Mary’s red quale breaks down into smaller component parts.
I presume that would be “Mary’s qualia are caused by the feeling of the color-processing pathways of her brain lighting up. The color-processing parts are made of neurons, which are made of molecules, which are made of atoms. Those parts of the brain are then connected to another part of the brain by more neurons, which are similarly composed. When those color-processing parts fire, this causes the connecting neurons to fire in a certain pattern. These patterns of firings are what her feelings are made of. Feelings are made out of firing neurons, which are in turn made out of atoms.”
As such, it is compatible with dualism.
I don’t get the appeal of dualism. Qualia can’t run on machines made out of atoms and quarks, but there is some other mysterious substance that composes our mind, and qualia can run on machines made out of this substance? Why the extra step? Why not assume that atoms and quarks are the substrate that qualia run on? What hypothetical special properties does this substance have that let qualia run on it, but not on atoms?
I’m sure that if we ever did discover some sort of disembodied soul made out of a weird previously unknown substance that was attached to the brain and appeared to contain our consciousness, Dave Chalmers would argue that qualia couldn’t possibly be reduced down to something as basic as [newly discovered substance], and that obviously this disembodied soul couldn’t possibly contain consciousness, that has to be contained somewhere else. There is no possible substance, no possible anything, that could ever satisfy the dualist’s intuitions.
You mean p-zombie arguments?
Yes, plus the inverted spectrum argument, and all the other “conceivability arguments.” I can conceive of myself walking on walls, bench-pressing semi-trucks, and flying without making any modifications to my body or changing the external world. But that’s because my brain is bad at conceiving stuff and fudges using shortcuts. If I actually start thinking in extremely detailed terms of my muscle tissues and the laws of physics, it becomes obvious that you can’t conceive of such a thing.
If anyone argued “I can imagine an anorexic person with almost no muscles lifting a truck, therefore strength cannot be caused by one’s muscles,” they would be laughed at. P-zombies and inverted spectrums deserve similar ridicule.
A claim that some X is made of some Y is not showing how X’s are made of Y’s. Can you explain why red is produced, and not something else?
There are many different neuron firing patterns. Some produce various shades of red; others produce other things.
I wasn’t selling dualism; I was noting that ESR’s account is not particularly physicalist, as well as not being particularly explanatory. I find the Mary argument more convincing.
The intuition that Mary’s Room activates is that no amount of book-learning can substitute for firsthand experience. This is because we can’t always use knowledge we obtain from reading about experiences to activate the same neurons that having those experiences would activate. The only way to activate them and experience those feelings is to have the activating experience.
Now, in Dennett’s RoboMary variation of the experiment, RoboMary would probably not say “Wow!” That is because she is capable of constructing a brain emulator of herself seeing red inside her own head, and then transferring the knowledge of what those neurons (or circuits, in this case) felt when activated. She already knows what seeing red feels like, even though she’s never seen it.
The dualist says: ‘I imagine Mary the color-blind learning all the scientific facts about color vision, including the fine neurological details, and correctly drawing any relevant inferences from these facts. Yet when I imagine Mary seeing red for herself for the first time, it seems to me that she would think that further epistemically open possibilities have been ruled out, that were previously open. There seemed to be more than one candidate subjective character red-detecting brain states could add up to, and learning “oh, that’s what red feels like!” narrowed down the model further.’
Since Mary is color-omniscient, some explanation then is needed for why she would harbor this false belief, or for why she wouldn’t really think that the first-hand experience had further narrowed down the experiential possibilities for her.
Saying ‘she hadn’t instantiated the property X’ doesn’t explain why anyone has this intuition, because in nearly all cases it’s possible to understand and expect properties without instantiating them oneself. If Mary were a volcanologist, there wouldn’t be some factual information she’s missing by virtue of not having her brain instantiate all the properties of a volcano. What is it about certain mental properties that makes them relevantly different?
Since Mary is color-omniscient, some explanation then is needed for why she would harbor this false belief, or for why she wouldn’t really think that the first-hand experience had further narrowed down the experiential possibilities for her.
Mary isn’t really color-omniscient. The thought experiment has the hidden false assumption that human beings can learn all types of knowledge by study alone. Since we all know this isn’t true, when we hear “Mary knows everything about color” our brains translate that into “Mary knows everything about color that one can learn by studying.” Our intuitions about whether she’ll say “Wow” or not are based on this translation.
jimrandomh makes a similar point:
The ability to recognize red objects is like the skill of riding a bicycle—it can only be acquired by doing it, not by study, because study can only train the linguistic centers of the brain, not the visual processing centers
In other words, Mary can’t figure out what qualia feel like because she is using the “linguistic” program and needs to use the “visual processing” one. It’s like trying to do a slideshow using Notepad instead of PowerPoint.
What is it about certain mental properties that makes them relevantly different?
Because human brains are much more complicated than volcanoes. Humans are only capable of assimilating so much propositional knowledge, and we are severely limited in our ability to convert it into other types of knowledge. orthonormal makes this point when explaining qualia.
Now, you could make Mary a superhuman creature that can assimilate vast amounts of knowledge and control and restructure her brain any way she wants. But if this assumption is made explicit my intuition that she would say “Wow!” when she goes outside disappears. A superhuman creature like that probably could figure out how seeing red felt without ever seeing it.
That’s another major problem with Mary’s Room. It posits a superhuman creature, capable of feats of learning and knowledge no human can achieve, but downplays that fact so that our intuitions are still conditioned to act like Mary is human.
Ex hypothesi, Mary knows all the relevant third-person specifiable color facts. Our inability to simulate her well doesn’t change that fact. If you’re saying there are some physical facts that it’s impossible for any agent to figure out scientifically, even in principle, then you’ll need to explain why.
The ability to recognize red objects is like the skill of riding a bicycle—it can only be acquired by doing it, not by study, because study can only train the linguistic centers of the brain, not the visual processing centers
But the intuition isn’t that Mary would acquire the ability to recognize red objects for the first time. It’s that she’d learn new facts about what redness feels like. Consider the Marianna variant:
“Like Mary, Marianna first (at t1) lives in a black and white environment. Contrary to Mary (at a later moment t2) she gets acquainted with colors by seeing arbitrarily colored objects (abstract paintings, red chairs, blue tables, etc. but no yellow bananas, no pictures of landscapes with a blue sky etc.). Marianna is therefore unable to relate the kinds of color experiences she now is acquainted with to what she already knew about them at t1. At t2, Marianna may wonder which of four slides (a red, a blue, a green and a yellow slide) appears to her in the color normal people experience when looking at the cloudless sky. At t2 Marianna knows, in a sense, what it is like to have experiences of red, blue, etc. But she still lacks the relevant items of knowledge about what other people experience: there is a clear sense in which she still may not know that the sky appears blue to normal perceivers, she may even have the false belief that it appears to normal perceivers like the red slide appears to her and thus believe, in a sense, that the sky appears red to normal perceivers. Only at t3, when Marianna is finally released and sees the sky, does she gain this item of knowledge.”
Mary can’t figure out what qualia feel like because she is using the “linguistic” program and needs to use the “visual processing one.”
Everything about volcanoes can be translated into a linguistic program, without information loss. Why can’t everything about visual processing be translated into a linguistic program without loss? If it’s merely a matter of qualia being complicated, then shouldn’t all other complicated systems yield relevantly identical Hard Problem intuitions? E.g., shouldn’t the planet Mars appear irreducible and ineffable and unphysical?
A superhuman creature like that probably could figure out how seeing red felt without ever seeing it.
My intuition is that making Mary superhuman doesn’t change that experiencing red seems to narrow down the possibilities for her. Analogously, a superhuman wouldn’t be able to scientifically narrow down the possibilities for what it’s like to be a bat to a single model, without generating bat-experiences of its own. Can you explain why this intuition persists for me, when (as far as I can tell) it doesn’t for any other complex system?
It posits a superhuman creature, capable of feats of learning and knowledge no human can achieve, but downplays that fact so that our intuitions are still conditioned to act like Mary is human.
Maybe, but in that case the challenge is to explain, at least schematically, what superhuman power Mary obtains that lets her solve the Hard Problem. Mere increased processing power alone doesn’t seem to dissolve the problem.
Ex hypothesi, Mary knows all the relevant third-person specifiable color facts. Our inability to simulate her well doesn’t change that fact.
It does if our inability to simulate her well messes with our intuitions. If, as I conjectured, we tend to translate “omniscient person” with “scholar with lots of book-learning” then our intuitions will reflect that, and will hence be wrong.
Consider the Marianna variant [...] But she still lacks the relevant items of knowledge about what other people experience
Is Marianna omniscient about light and neuroscience like Mary? If she is, she’d be able to figure out which color is which fairly easily.
If it’s merely a matter of qualia being complicated, then shouldn’t all other complicated systems yield relevantly identical Hard Problem intuitions?
It’s not just a matter of qualia being complicated; it’s a matter of the human brain being bad at communicating certain things, of which qualia are only one among many. And this isn’t just an issue of processing power and the complexity of what is being processed; it’s an issue of software problems. There are certain problems we have trouble processing regardless of what level of power we have, because of our mind’s internal architecture. Wei Dai puts it well when he says:
...a quale is like a handle to a kernel object in programming. Subconscious brain corresponds to the OS kernel, and conscious brain corresponds to user-space. When you see red, you get a handle to a “redness” object, which you can perform certain queries and operations on, such as “does this make me feel hot or cold”, or “how similar is this color to this other color” but you can’t directly access the underlying data structure. Nor can the conscious brain cause the redness object to be serialized into a description that can be deserialized in another brain to recreate the object. Nor can Mary instantiate a redness object in her brain by studying neuroscience.
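For concreteness, here is a minimal Python sketch of the handle/kernel-object picture Wei Dai describes. Every name in it (_Kernel, instantiate, query, and so on) is invented for illustration; this is a toy model of the analogy, not of any real operating system, and certainly not of the brain.

```python
# Toy model of the quale-as-kernel-handle analogy. All names are
# hypothetical. The 'kernel' (subconscious brain) owns the real objects;
# 'user space' (conscious brain) only ever holds opaque handles and a
# narrow, fixed query interface.

class _Kernel:
    def __init__(self):
        self._objects = {}      # handle -> hidden internal structure
        self._next_handle = 1

    def instantiate(self, internal_state):
        # Only the kernel creates objects; 'seeing red' triggers this.
        handle = self._next_handle
        self._next_handle += 1
        self._objects[handle] = internal_state
        return handle           # user space receives only this number

    def query(self, handle, operation):
        # The few fixed operations user space is permitted to perform.
        state = self._objects[handle]
        if operation == "warm_or_cool":
            return state["affect"]
        if operation == "similarity_coords":
            return state["hue"]
        raise PermissionError("no direct access to the underlying structure")

    def serialize(self, handle):
        # The analogy's central claim: this operation does not exist, so
        # the object cannot be exported to, or rebuilt in, another brain.
        raise NotImplementedError("a quale handle cannot be serialized")


kernel = _Kernel()
red = kernel.instantiate({"affect": "warm", "hue": (0.9, 0.1, 0.1)})

print(kernel.query(red, "warm_or_cool"))   # permitted query -> 'warm'
try:
    kernel.serialize(red)                  # Mary's problem, in miniature
except NotImplementedError as err:
    print(err)
```

On this picture, Mary’s textbooks can describe the kernel’s bookkeeping in arbitrary detail, but studying the description never amounts to calling instantiate on her own hardware.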
Furthermore, there are in fact other things that humans have a lot of difficulty communicating besides qualia. For instance, it’s common knowledge that people with a few days of job experience are much better at doing jobs than people who have spent months reading about the job.
My intuition is that making Mary superhuman doesn’t change that experiencing red seems to narrow down the possibilities for her.
I disagree. If Mary was a superhuman she could study what functions of the brain cause us to experience “qualia,” and then study the memories these processes generated. She could then generate such memories in her own brain, giving her the knowledge of what qualia feel like without ever experiencing them. She would see red and not be surprised at all.
If qualia were not a physical part of the brain, duplicating the memories of someone who had experienced them would not have this effect. However, I think it very likely that doing so would have this effect.
Can you explain why this intuition persists for me, when (as far as I can tell) it doesn’t for any other complex system?
Because, as I said before, our emotions are “black boxes” that humans are very bad at understanding and explaining. Their Kolmogorov complexity is extraordinarily high, but we feel like they are simple because of our familiarity with them.
Maybe, but in that case the challenge is to explain, at least schematically, what superhuman power Mary obtains that lets her solve the Hard Problem.
I think the ability to study and modify her own source code and memory, as well as the source code and memory of others is probably all she’d need, but I could be wrong.
“My intuition is that making Mary superhuman doesn’t change that experiencing red seems to narrow down the possibilities for her.”
“I disagree.”
You… disagree? Do you mean your own intuition is different, or do you mean you have some special insight into my psychology that tells you that I’m misunderstanding or misrepresenting my own intuitions?
I’m reporting on psychological data about what my intuitions are indicating to me. I’m not a dualist, so I’m not (yet) making any assertions about what Mary would actually do or say or know. I’m explaining what output my simulator is giving me when I run the thought experiment.
If Mary was a superhuman she could study what functions of the brain cause us to experience “qualia,” and then study the memories these processes generated. She could then generate such memories in her own brain, giving her the knowledge of what qualia feel like without ever experiencing them.
You’re assuming that all superhumans intelligent enough to understand the biophysics of color vision will also necessarily have a module that allows them to self-modify in a way that they have whatever first-person subjective experience they wish. There’s no reason to assume that. As long as a Mary without this capacity (but with the third-person biophysics-comprehending capacity) is possible, the argument goes through. The fact that a Mary that can spontaneously generate its own experience of redness is also possible doesn’t make any progress toward refuting or dissolving the Mary hunch.
It sounds to me like you’ve been reading too much Dennett. Dennett is not a careful or patient dissector of the Hard Problem. The entire RoboMary paper, for instance, is a non sequitur in relation to the arguments it’s meant to refute. It’s fun and interesting, but it’s talking about a different subject matter.
If qualia were not a physical part of the brain, duplicating the memories of someone who had experienced them would not have this effect.
That’s not true at all. Most forms of dualism allow Mary to generate the relevant mental states by manipulating the physical states they are causally tied to.
“Can you explain why this intuition persists for me, when (as far as I can tell) it doesn’t for any other complex system?”
“Because, as I said before, our emotions are ‘black boxes’ that humans are very bad at understanding and explaining.”
This doesn’t look to me like an explanation yet, even an outline of one. In fact, it looks like an appeal to the Black Box black box: ‘Black box’ is being used as a special word meant to pick out some uniquely important and effective category of Unknown Thingie. But just saying ‘we don’t understand emotions yet’ doesn’t tell me anything about why emotions appear irreducible to me, while other things I don’t understand do seem reducible to me.
Their Kolmogorov complexity is extraordinarily high, but we feel like they are simple because of our familiarity with them.
I don’t feel that mental states are simple! Yet the Mary hunch persists. You seem to be hopping back and forth between the explanations ‘qualia seem irreducible because we don’t know enough about them yet’ and ‘qualia seem irreducible because we don’t realize how complicated they are’. But neither of these explanations makes me any less confused, and they’re both incredibly vague. I think this is a legitimate place to insist that we say not “complexity”.
I think the ability to study and modify her own source code and memory, as well as the source code and memory of others is probably all she’d need,
Why, specifically, would any of those four abilities help? Are all four needed? Are some more important than others? Why, for instance, wouldn’t just studying my own source code and memory (without being able to do radical surgery on it) suffice for knowing the phenomenal character of redness, or the phenomenal character of a bat’s echolocation...?
You… disagree? Do you mean your own intuition is different, or do you mean you have some special insight into my psychology that tells you that I’m misunderstanding or misrepresenting my own intuitions?
I mean my intuition is different.
I don’t feel that mental states are simple! Yet the Mary hunch persists. You seem to be hopping back and forth between the explanations ‘qualia seem irreducible because we don’t know enough about them yet’ and ‘qualia seem irreducible because we don’t realize how complicated they are’.
Alright, I’ll try to stop hopping and nail down what I’m saying:
I think the most likely reason that qualia seem irreducible is because of some kind of software problem in the brain that makes it extremely difficult, if not impossible, for us to translate the sort of “experiential knowledge” found in the unconscious “black box” parts of the brain into the sort of verbal, propositional knowledge that we can communicate to other people by language. The high complexity of our minds probably compounds the difficulty even further.
I think this problem goes both ways. So even if we could get some kind of AI to translate the knowledge into verbal statements for us, it would be impossible, or very difficult, for anything resembling a normal human to gain “experiential knowledge” just by reading the verbal statements.
In addition to making qualia seem irreducible, this phenomenon explains other things, such as the fact that many activities are easier to learn to do by experience.
I’ve never actually read any Dennett, except for short summaries of some of his criticisms written by other people. One person who has influenced me a lot is Thomas Sowell, who frequently argues that the most important knowledge is implicit and extremely difficult, if not impossible, to articulate into verbal form. He does this in terms of economics, but when I started reading about the ineffability of qualia I immediately began to think “This probably has a similar explanation.”
I think this problem goes both ways. So even if we could get some kind of AI to translate the knowledge into verbal statements for us, it would be impossible, or very difficult, for anything resembling a normal human to gain “experiential knowledge” just by reading the verbal statements.
Mary isn’t a normal human. The point of the story is to explore the limits of explanation. That being the case, Mary is granted unlimited intelligence, so that whatever limits she encounters are limits of explanation, and not her own limits.
I think the most likely reason that qualia seem irreducible is because of some kind of software problem in the brain that makes it extremely difficult, if not impossible, for us to translate the sort of “experiential knowledge” found in the unconscious “black box” parts of the brain into the sort of verbal, propositional knowledge that we can communicate to other people by language. The high complexity of our minds probably compounds the difficulty even further.
Whatever is stopping Mary from understanding qualia, if you grant that she does not, is not their difficulty in relation to her abilities, as explained above. We might not be able to understand our qualia because we are too stupid, but Mary does not have that problem.
If you’re asserting that Mary does not have the software problem that makes it impossible to derive “experiential knowledge” from verbal data, then the answer to the puzzle is “Yes, Mary does know what red looks like, and won’t be at all surprised. BTW the reason our intuition tells us the opposite is because our normal simulate-other-humans procedures aren’t capable of imagining that kind of architecture.”
Otherwise, simply postulating that she has unlimited intelligence is a bit of a red herring. All that means is that she has a lot of verbal processing power; it doesn’t mean all bugs in her mental architecture are fixed. To follow the kernel-object analogy: I can run a program on any speed of CPU, and it will never be able to get a handle to a kernel redness object if it doesn’t have access to the OS API. The “intelligence” of the program isn’t a factor (this is how we’re able to run high-speed JavaScript in browsers without every JS program being a severe security risk).
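Continuing the toy sketch from the Wei Dai comment above (again, all names are invented for illustration): compute and access rights are independent axes. A sandboxed program can burn as many cycles as it likes, but if no kernel reference was ever passed into its namespace, no amount of processing power will produce one.

```python
# Toy illustration of the sandbox point. run_sandboxed grants untrusted
# code unlimited compute but a closed namespace: nothing kernel-related
# is among the names it can reach, however fast the CPU is.

def run_sandboxed(source: str):
    allowed = {"__builtins__": {"print": print, "sum": sum, "range": range}}
    exec(source, allowed)   # no kernel reference is ever passed in

# Arbitrarily heavy computation succeeds at any CPU speed...
run_sandboxed("print(sum(range(10**6)))")

# ...but no cleverness inside the sandbox conjures a kernel handle.
try:
    run_sandboxed("print(kernel.query(1, 'warm_or_cool'))")
except NameError as err:
    print("blocked:", err)  # name 'kernel' is not defined
```

The failure is about which names are in scope, not about how fast the program runs; that is the sense in which the “intelligence” of the program isn’t a factor.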
If this is the case then, as I said before, my intuition that she would not understand qualia disappears.
For any value of abnormal? She is only quantitatively superior: she does not have brain-rewiring abilities.
“Ex hypothesi, Mary knows all the relevant third-person specifiable color facts. Our inability to simulate her well doesn’t change that fact.”
“It does if our inability to simulate her well messes with our intuitions. If, as I conjectured, we tend to translate ‘omniscient person’ with ‘scholar with lots of book-learning’ then our intuitions will reflect that, and will hence be wrong.”
‘Ex hypothesi’ here means ‘by stipulation’ or ‘by the terms of the conditional argument’. The assumption is ‘Mary is a color scientist who knows all the relevant facts about color vision, but has never experienced color in her own visual field’. You aren’t denying that this is a consistent, coherent hypothetical. All you’re suggesting is that a being that satisfied this hypothetical would have transhuman or posthuman capacities for data storage and manipulation. So far so good.
You then insist that such a being, if it acquired color vision, would be completely unsurprised by the particular shade of red it now (for the first time) encounters; whereas the dualist insists in that situation the transhuman would learn a new fact, would acquire new, possibilities-ruling-out information. (A sentient supercomputer without the capacity to experience color would run into the exact same trouble.)
Up to that point, the two of you remain in a stalemate. (Or worse than a stalemate, from your perspective, since you find it baffling that anyone could share the dualist’s intuitions or reasoning, whereas the dualist perfectly well understands the intuitive force of your argument, and just doesn’t think it’s strong enough.)
Is Marianna omniscient about light and neuroscience like Mary? If she is, she’d be able to figure out which color is which fairly easily.
So you assert. The goal here isn’t to just repeatedly assert, in various permutations, that dualists are wrong. The goal is to figure out why they think as they do, so we can dissolve the question. Swap out ‘free will’ for ‘irreducible qualia’ in Eliezer’s recommendation:
“It is a fact about human psychology that people think they have free will. Finding a more defensible philosophical position doesn’t change, or explain, that psychological fact. Philosophy may lead you to reject the concept, but rejecting a concept is not the same as understanding the cognitive algorithms behind it. [...]
“The key idea of the heuristics and biases program is that the mistakes we make, often reveal far more about our underlying cognitive algorithms than our correct answers [...] But once you understand in detail how your brain generates the feeling of the question [...] then you’re done. Then there’s no lingering feeling of confusion, no vague sense of dissatisfaction.
“If there is any lingering feeling of a remaining unanswered question, or of having been fast-talked into something, then this is a sign that you have not dissolved the question. A vague dissatisfaction should be as much warning as a shout. Really dissolving the question doesn’t leave anything behind.
“A triumphant thundering refutation of free will, an absolutely unarguable proof that free will cannot exist, feels very satisfying—a grand cheer for the home team. And so you may not notice that—as a point of cognitive science—you do not have a full and satisfactory descriptive explanation of how each intuitive sensation arises, point by point.
“You may not even want to admit your ignorance, of this point of cognitive science, because that would feel like a score against Your Team. In the midst of smashing all foolish beliefs of free will, it would seem like a concession to the opposing side to concede that you’ve left anything unexplained.
“And so, perhaps, you’ll come up with a just-so evolutionary-psychological argument that hunter-gatherers who believed in free will, were more likely to take a positive outlook on life, and so outreproduce other hunter-gatherers—to give one example of a completely bogus explanation. If you say this, you are arguing that the brain generates an illusion of free will—but you are not explaining how. You are trying to dismiss the opposition by deconstructing its motives—but in the story you tell, the illusion of free will is a brute fact. You have not taken the illusion apart to see the wheels and gears.”
If you keep rushing again and again to swiftly solve the problem—or, worse, rushing again to affirm that the problem is solved—then it will be harder to notice the points that cause you confusion. My appeal to the Marianna example is a key example of a place that should have made you stop, furrow your brow, and notice that the explanation you gave before to dispel Mary-intuitions doesn’t work for Marianna-intuitions, even though the two seem to be of the same kind. It would be surprising indeed if ‘Mary lacked the ability to visualize redness’ were a big part of the explanation in the former case, yet not in the least bit a part of the latter case, given their obvious parallelism. This suggests that the explanation you first gave is off-base in the Mary case too. Retreating to just asserting that dualism is wrong is missing the important tidal shift that just happened.
There are certain problems we have trouble processing regardless of what level of power we have, because of our mind’s internal architecture.
OK. But ‘Qualia seem irreducible because something about how our brains work makes them seem irreducible’ isn’t the most satisfying of explanations. Could you give a little more detail?
When you see red, you get a handle to a “redness” object, which you can perform certain queries and operations on, such as “does this make me feel hot or cold”, or “how similar is this color to this other color” but you can’t directly access the underlying data structure. [...] Nor can Mary instantiate a redness object in her brain by studying neuroscience.
OK. But couldn’t all of the same be said of ordinary macroscopic objects in our environment, too? When I see a table (a physical table in my environment—not a table-shaped quale in my visual field), I can’t directly access the underlying fine-grained quantum description of the table. Nor can I make tables spontaneously appear in my environment by acquiring superhuman knowledge of the physics of tables. Yet tables don’t seem to pose any problem at all for reductionism.
If tables and qualia have all these things in common, then where does the actual difference lie, the difference that explains why there seems to be a Hard Problem in one case and not in the other?
it’s common knowledge that people with a few days of job experience are much better at doing jobs than people who have spent months reading about the job.
But is that because people who only learn about jobs indirectly are lacking certain key pieces of factual knowledge? The problem raised by Mary’s Room isn’t ‘Explain why Mary intuitively seems to get better at completing various tasks’; it’s ‘Explain why Mary intuitively seems to learn new factual knowledge’. This is made clearer by the Marianna example. Your analogy only helps us give a physicalistic explanation of the former, not the latter.
She could then generate such memories in her own brain,
Mary is a super-scientist in terms of intelligence and memory, but doesn’t have special abilities to rewire her own cortex. Internally generating Red is a cheat, like pricking her thumb to observe the blood.
She isn’t generating Red, she’s generating a memory of the feeling Red generates, without generating Red. She now knows what emotional state Red would make her feel, but hasn’t actually made herself see red. So when she goes outside she doesn’t say “Wow”; she says “Oh, those feelings again, just as I suspected.”
Why is she generating a memory? How is she generating a memory?
So she’s bound and gagged, with no ability to use her knowledge? Seems implausible, but OK. (Did she get this knowledge by dictation, or by magically reaching out to the Aristotelian essences of neurons?)
In any case, at least two of us have linked to orthonormal’s mini-sequence on the matter. Those three posts seem much better than ESR’s attempt at the quest.
So she’s bound and gagged, with no ability to use her knowledge?
If by “using her knowledge” you mean performing neurosurgery on herself, I have to repeat that that is a cheat. Otherwise, I have to point out that knowledge of, e.g., photosynthesis doesn’t cause photosynthesis.
Sure, that would be this mini-sequence by orthonormal.