What do you expect reductionism to do? We already know what inputs correspond to what qualia, so that cannot be the question. We know that the process very probably involves the brain, because that’s where light goes in and talk of qualia comes out. I would add that it’s exceedingly unlikely that a tiny patch of dense, electrically active tissue would just happen to implement some basic physical law that is not found anywhere else in nature, and in any case this does not seem to me to be necessary.
I think the problem is that reductionist explanations, as they stand, lack the power to construct in our heads a model that bridges sensory, electrical input and cognitive experience. To which I say—would you expect it to? Cognition, reflection and perception are probably the parts of our minds most tightly coupled with the rest of the hardware. They’re the areas where intuition has the greatest importance for efficient processing, and thus where a physical account adds the least relative value. Complicated design, existing hacky intuitive model—it makes sense that this is the last-understood part of our minds.
That said, I think you’re expecting too much from reductionism. What would you expect a working theory of qualia to look like? I think that to say a theory of qualia should allow us to cognitively simulate the subjective experience of a different cognitive model asks something of the theory that our minds cannot deliver. Our wetware has no reason to have this capability.
It’s like saying a physical simulation of a loudspeaker is necessarily incomplete unless it can make sound come out of your CPU.
I consider reductive explanation to be a form of explanation that offers a persuasive association of concepts together with the ability to make quantitative and novel predictions. If someone doubts that there is a reductive explanation of heat, you hand them a textbook of thermodynamics, and the matter is settled.
I consider reductionism to be the handwaving, philosophical claim that reductive explanation is the best explanation and/or that it will eventually succeed in all cases.
So a reductive explanation of qualia would be something uncontroversial written up in a textbook; whereas reductionism about qualia would be the controversial claim that a reductive explanation of qualia is possible.
“We don’t need new physics” is a typical handwaving claim for reductionism, which can be countered by the handwaving claim that we do need new physics (e.g. Chalmers’ The Conscious Mind).
I don’t expect toasters to make coffee, and that isn’t a problem, because it is not their function. It isn’t obviously a problem that explanations don’t induce what they are explaining. The standard explanation of photosynthesis doesn’t make me photosynthesise, and is none the worse for it. Explanations are supposed to explain. That is their function. If there is something you can learn by actually having a quale that you didn’t learn from a supposed explanation, then the supposed explanation was not a full explanation. That actually having qualia tells you something is what makes it a problem, rather than a wild irrelevancy, that an explanation doesn’t induce what it is explaining.
I still don’t know why you think the induction of qualia is important.
I don’t expect reductive explanations to deliver induction of what they explain. I do expect full explanations to fully explain. If Mary’s Aha! tells her nothing, she never had a full explanation... and that isn’t due to lack of detail in the explanations or brainpower on her part, both of which are waved away in the story.
I don’t know what your intuitions are, because you won’t tell me. However, I suspect that they may be inconsistent.
Darn. I had a huge post written up about my theory of how qualia work, and how you’d build a mind that could generate new qualia from a reductionist description, when I realized that I had no way to quantify success, because there’s no way to compare qualia between two brains. Our qualia are a feature of the cognitive architecture we use, and it’d be as silly to try and place them side by side as it would to try and compare ID handles between two different databases (even with the same schema, but especially with different schemata).
But this argument goes both ways. If I can’t quantify success, how can you quantify failure? How is it possible to say that a machine emulating a bat’s mind, or my mind, would lack additional knowledge gained from actually having the qualia, if the input/output mapping is already perfect? Wouldn’t that additional knowledge then necessarily have to be causally inert, and thus be purged by the next GC run?
The necessary absence of both success and failure hints at incoherence in the question.
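The “purged by the next GC run” metaphor can be sketched concretely: data that no computation ever reads has no causal influence on a program’s output, and a collector is free to reclaim it. A minimal Python illustration (the `Extra` class is purely hypothetical, standing in for the supposed extra knowledge):

```python
import gc
import weakref

class Extra:
    """Stands in for hypothetical 'additional knowledge' that no
    computation ever reads, i.e. causally inert data."""
    pass

knowledge = Extra()
probe = weakref.ref(knowledge)  # observe the object without keeping it alive

del knowledge   # no strong references remain: nothing can ever read it again
gc.collect()    # in CPython, reference counting alone already frees it here

# The causally inert object has been purged; only data that something
# still references (and so could still affect output) survives.
print(probe() is None)  # True
```

The point of the sketch: if the “extra knowledge” never feeds into the input/output mapping, it is indistinguishable from garbage to the system that holds it.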
I suspect the distinction is that the quale itself, stripped of the information it accompanies, doesn’t tell me anything, any more than a database ID does. The meaning comes from what it references, and what it references can be communicated and compared. Not necessarily in a form that allows us to experience it—thus the “feeling that there’s something missing”—but you can’t communicate that anyway without mangling my cognitive architecture to support bat-like senses, at which point questioning that I have bat qualia will be like questioning that other people “truly perceive red”—mildly silly. The qualia were never what it was about—red isn’t about red, it’s about wavelength and fire and sunset and roses. The quale is just the database ID. I suspect that the ability to even imagine a different person that perceives green where you do red is a bug of our cognition—we gained the ability to model other minds relatively late, and there was no good reason for evolution to program in the fact that we cannot directly compare our database IDs to other people’s. (I mean, when was that ever gonna become relevant?)
It’s important to distinguish novel qualia from foreign qualia. I may not be able to model bat qualia, but I can elicit a novel quale by trying food I haven’t tasted before, and so on. Everyone has had an experience of that kind, and almost everyone finds that direct experience conveys knowledge that isn’t conveyed by any description, however good.
You seem to have persuaded yourself that qualia don’t contain information on the basis of an untested theory.
I would suggest that the experiences of novel qualia that everyone has had are empirical data that override theories.
We actually know quite a lot about qualia as a result of having them...it just isn’t reductionistic knowledge.
You also seem to have reinvented the idea I call computational zombies. It’s physicalistically possible for an AI to be a functional duplicate of a human, but to lack qualia, e.g. by having the wrong qualia [edit: I meant wrong physics, not wrong qualia]. However, that doesn’t prove qualia are causally idle.
It’s also important to distinguish functional idleness and causal idleness. The computational zombie won’t have a box labelled “qualia” in its functional design. But functional designs don’t do anything unless implemented. A computer is causally driven by physics. If qualia are part of physics, then they’re doing the driving along with [the rest of] it.
I don’t know what question you think is rendered incoherent.
There may be naturalistic reasons to believe we shouldn’t be able to model other minds, but that does not rescue the claim, if you are still making it, that there is a reductive explanation of qualia. Rather, it is an excuse for not having such an explanation.
So what information does a new taste contain? What it’s similar to, what it’s dissimilar to, how to compare it, what it reminds you of, what foods trigger it—but those are all pieces of information that can be known; they don’t need to be experienced. So what information does the pure quale contain?
If that information could be fully known (in a third-person way), then you could just read the label and not drink the wine. Things don’t seem to work that way.
Yeah, but at that point you have to ask: what makes you think qualia, or rather, “the choice of qualia as opposed to a different representation”, is a kind of information about the world? It seems to me more plausible that it’s just a fact about the way our minds work. Like, knowing that the bat uses qualia is an implementation aspect of the batmind and can thus probably be separated from the behavior of the bat. (Bats make this easier because I suspect they don’t reflect on the fact that they perceive qualia.) Are there functions that can only be expressed with qualia?
As a similar example, imagine a C-Zombie, a human that is not conscious and does not speak of consciousness or having qualia. This human’s mind uses qualia, but his model of the world contains no quale-of-qualia, no awareness-of-awareness, no metarepresentation of the act of perception. Can I reimplement his mind, without cheating and making him lie, to not use qualia? Is he a different person after? (My intuition says “yes” and “no”.)
[edit] My intuition wishes to update the first response to “maybe, idk”.
But that’s just my intuition on the topic.
Qualia are information for us: if you have no visual qualia, you are blind, etc.