I’m not sure what sort of knowledge we are talking about, and I suspect that Nagel’s argument is based on an equivocation.
If we are talking about epistemic beliefs, expectations about observations, then we can certainly study the phenomenon of bat consciousness with a reductionist (that is, scientific) approach.
What we can’t do is simulate the mental states of a bat using our innate agent-simulation mental machinery, for the same reason a color-blind person can’t simulate the mental state of a non-color-blind person perceiving red and green as different colors, or Mary can’t simulate perceiving colors. These experiences are a type of knowledge that can’t be obtained from scientific research, even though they aren’t (in principle) intrinsically epistemically meaningful: if you take Mary outside her black-and-white room, she will experience new mental states even though her epistemic beliefs essentially don’t change.
It seems to me that this issue stems just from a limitation of our mental machinery, not an intrinsic flaw of the scientific method or a non-physical nature of consciousness.
If everything can be explained in reductionistic terms, then so can what it feels like to actually instantiate an experience. If actual experience can’t be explained reductionistically, then reductionism fails.
I see no reason why Mary should not update on “reductionism can explain anything”. After all, she has just had a directly contrary experience.
The question is what “explains” means. Her experience shows that “reading about science in a grey room cannot put your brain into arbitrary states” (e.g. the states that arise from coloured rooms). So there are some things it cannot make you “know”, if by know you mean “empathize with” (== “simulate with innate agent simulation machinery”). But that’s not really surprising—it mostly sounds interesting because of equivocation with other senses of “know”.
Daniel Dennett’s analysis of Mary basically works by pointing out this equivocation. He notes that the kind of knowledge Mary does have is enough to do impressive things, such as recognizing a prank if she is handed a blue banana. So the thing she ‘learns’ is less dramatic than what the phrasing of the story (“now I finally know what it is like to experience colour!”) would lead you to expect.
If physical reductionism explains everything, then it explains what it is like to be in some brain state.
If you assume that instantiating a brain state is necessary to fully understand it, you have conceded the point of Nagel and other qualiaphiles.
It is not surprising—it is intuitive—that there is something special about instantiation. However, that is not compatible with strong reductionist intuitions. That is what makes Nagel’s argument interesting.
If you are willing to adopt how-was-it-for-me behaviourism, there is nothing special about Mary. But most find that position counterintuitive.
So let me preface this by saying that I don’t have any world-shattering insights about this; I think my view is bog-standard naive physicalism. But it seems to me that naive physicalism can handle these issues.
I think the concepts “explain” and “understand” cause confusion here, so let’s switch to the hopefully easier “is experiencing” and “has ever experienced”. So suppose that for any given subjective sensation, say redness, there is some class of brain states such that an observer is experiencing that sensation iff their brain state is in the class. We can figure out what the class is, e.g., by asking people what they are experiencing while we carefully examine their brains. Then, to (“fully”) experience a sensation, you need to instantiate one of those brain states. Does this mean that there is something special about instantiation? I mean, yes there is—but in a quite trivial sense, which seems compatible with reductionist intuitions.
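The “trivial sense” claimed above can be made concrete with a toy sketch (the brain-state names are invented placeholders, not real neuroscience): a sensation is identified with a class of states, and “experiencing it” is just membership in that class, which is why instantiation is required for experience but nothing metaphysically extra is going on.

```python
# Toy formalization: a sensation == a class of brain states, and
# "is experiencing the sensation" == membership in that class.
# All state names below are hypothetical placeholders.
redness_states = {
    frozenset({"V4_high", "opponent_red_channel"}),
    frozenset({"V4_high", "opponent_red_channel", "attending"}),
}

def is_experiencing_redness(brain_state):
    """An observer experiences redness iff their brain state is in the class."""
    return brain_state in redness_states

mary_reading = frozenset({"V1_text", "memory_recall"})         # knows all about red
mary_outside = frozenset({"V4_high", "opponent_red_channel"})  # actually sees red

assert not is_experiencing_redness(mary_reading)  # propositional knowledge != membership
assert is_experiencing_redness(mary_outside)      # instantiation == membership, trivially
```

On this picture Mary in her room simply never occupies a state in the class, however much she knows about the class.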
I am not sure what is supposed to be trivial or special here. If you believe that there is something extra about instantiating a brain state, such that you have to instantiate it to fully understand it, you have already rejected the strongest form of reductionism, because strong reductionism claims to explain everything reductionistically, i.e. by presenting some sufficiently complicated formula.
You can argue that you only need to put labels on brain states, and thereby explain Red as the brain state with a certain label. That would be ontologically neutral and therefore not commit you to a non-physicalist ontology. However, since it is non-committal, it doesn’t imply physicalism either, since you could equally be labelling subjective states with brain scans. You have decided to go one way and not the other, but that is your decision.
Taking correlation to be causation is one thing; taking it to be identity is another.
You keep using the word “understand” without defining it. I thought V_V’s original comment was good because it points out that there are two plausible meanings that could be in play. If you want “predict observations”, then nothing is missing. If you want “simulate using innate simulation machinery”, then indeed you need something extra, but it’s not a better metaphysics, it’s a JTAG port to your brain.
Why would I even want to simulate a state in my own brain, unless it brings some increase in knowledge?
You can rescue reductionism by maintaining that you get nothing from instantiating the state yourself, that Mary has no “aha!”
You can also rescue it by maintaining that it predicts the subjective state that brings about the aha.
But you can’t maintain that there is some point to personal instantiation which has no impact on reductionism. If it’s a thing and it’s not predicted by reductionism, then there is a thing reductionism can’t explain in its own terms.
Reductionism can explain it—i.e. causally justify and predict its appearance and behavior. But reductionism cannot explain it to people—i.e. induce in their heads the appropriate pattern by vocal communication. Those are different meanings of the same word. Reductionism can causally map any bat-brain output to its inputs and state, but it cannot be used as an argumentative tool to induce in listeners an analogue of the bat’s mindstate.
Reductionism can’t explain it. It can’t predict novel experience, e.g. what it is like to be a bat on LSD. Reductionism also can’t predict experience by applying consistent laws. You can match off known brain states to known subjective experiences in a kind of dictionary or database, but that is not what we normally mean by EXPLANATION.
The idea that you cannot have an explanation that you cannot explain to anybody is problematical. How is that different from not having an explanation? (Owen Flanagan suggests that scientists could test a predictive theory of qualia against their own experience. But again, that abandons strong physicalism, since it accepts the existence of irreducible subjectivity.)
Reductionism can’t explain it. It can’t predict novel experience, e.g. what it is like to be a bat on LSD.
It doesn’t predict novel experience; in other words, it predicts I will not suddenly feel like a bat on LSD. This is correct, since I won’t, not being a bat.
You can match off known brain states to known subjective experiences in a kind of dictionary or database, but that is not what we normally mean by EXPLANATION.
What I mean by explanation is “provide a causal model that predicts future behavior”. What do you mean by explanation?
The idea that you cannot have an explanation that you cannot explain to anybody is problematical. How is that different to not having an explanation?
I have no idea what you’re saying here. Did you mean “can have an explanation that you cannot explain to anybody”? That’s not what I said. I said reductionism cannot be used as a tool to induce in people’s minds patterns analogous to a bat’s mind. Let’s taboo the word “explanation” here—say what you mean.
It is no advertisement for a reductionistic theory of qualia that it can’t ever make novel predictions, since other reductionistic theories can predict novel phenomena.
It is no advertisement for a reductionistic theory of qualia that it doesn’t predict novel experiences in someone taking LSD, since, empirically, such experiences are reported.
Saying that something won’t happen is not much of a prediction, or I can predict tomorrow’s weather by saying there won’t be a tornado.
Reductive explanations show how higher-level properties and behaviours arise from lower-level properties and behaviours. Stating that they arise is not showing how. Explanations answer how-questions by relating concepts, so as to increase understanding in the reader.
We have well known examples of reductive explanations, such as the reduction of heat to molecular motion.
They are conceptual. They are not look-up tables that match off one property against another. Nor are they causal models in the sense of directed graphs, since a directed graph has no conceptual content.
We don’t know how qualia, the higher-level properties in question, relate to their bases. If we did, you could reverse the reduction and construct code or electronics that could generate qualia. Which we can’t, at all, although we can build memory and cognition, and even, absent qualia, perception.
You still haven’t said why you think instantiation is important.
If we had explanations that output some formula or sentence that told us what experience was like, we would not need personal instantiation … Mary would not say “aha!” If such an explanation exists, I would like to see it.
If our putative explanations don’t tell us what the experiences are like, then instantiation would be necessary to know what they are like... and you wouldn’t have an explanation of qualia, because qualia are what experiences feel like.
What do you expect reductionism to do? We already know what inputs correspond to what qualia, so that cannot be the question. We know that the process very probably involves the brain, because that’s where light goes in and talk of qualia comes out. I would add that it’s exceedingly unlikely that a tiny patch of dense, electrically active tissue would just happen to implement some basic physical law that is not found anywhere else in nature, and in any case this does not seem to me to be necessary.
I think the problem is that reductionist explanations, as they stand, lack the power to construct in our heads a model that bridges sensory, electric input and cognitive experiences. To which I say—would you expect it to? Cognition, reflection and perception are probably the parts of our minds most tightly coupled with the rest of the hardware. They’re the areas where intuition has the largest importance for efficient processing, and thus where a physical account adds the least relative value. Complicated design, existing hacky intuitive model—it makes sense that this is the last-understood part of our minds.
That said, I think you’re expecting too much from reductionism. What would you expect a working theory of qualia to look like? I think to say that a theory of qualia should allow us to cognitively simulate the subjective experience of a different cognitive model is to ask something of the theory that our minds cannot deliver. Our wetware has no reason to have this capability.
It’s like saying a physical simulation of a loudspeaker is necessarily incomplete unless it can make sound come out of your CPU.
I consider reductive explanation to be a form of explanation that offers a persuasive association of concepts together with the ability to make quantitative and novel predictions. If someone doubts that there is a reductive explanation of heat, you hand them a textbook of thermodynamics, and the matter is settled.
I consider reductionism to be the handwaving, philosophical claim that reductive explanation is the best explanation and/or that it will eventually succeed in all cases.
So a reductive explanation of qualia would be something uncontroversial written up in a textbook; whereas reductionism about qualia would be the controversial claim that a reductive explanation of qualia is possible.
“We don’t need new physics” is a typical handwaving claim for reductionism, which can be countered by the handwaving claim that we do need new physics (e.g. Chalmers’ The Conscious Mind).
I don’t expect toasters to make coffee, and that isn’t a problem, because it is not their function. It isn’t obviously a problem that explanations don’t induce what they are explaining. The standard explanation of photosynthesis doesn’t make me photosynthesise, and is none the worse for it. Explanations are supposed to explain. That is their function. If there is something you can learn by actually having a quale that you didn’t learn from a supposed explanation, then the supposed explanation was not a full explanation. That actually having qualia tells you something is what makes it a problem, rather than a wild irrelevancy, that an explanation doesn’t induce what it is explaining.
I still don’t know why you think the induction of qualia is important.
I don’t expect reductive explanations to deliver induction of what they explain. I do expect full explanations to fully explain. If Mary’s “aha!” tells her nothing, she never had a full explanation... and that isn’t due to lack of detail in the explanations or brainpower on her part, both of which are waved away in the story.
I don’t know what your intuitions are, because you won’t tell me. However, I suspect that they may be inconsistent.
If there is something you can learn by actually having a quale that you didn’t learn from a supposed explanation, then the supposed explanation was not a full explanation. That actually having qualia tells you something is what makes it a problem.
Darn. I had a huge post written up about my theory of how qualia work, and how you’d build a mind that could generate new qualia from a reductionist description, when I realized that I had no way to quantify success, because there’s no way to compare qualia between two brains. Our qualia are a feature of the cognitive architecture we use, and it’d be as silly to try and place them side by side as it would to try and compare ID handles between two different databases (even with the same schema, but especially with different schemata).
But this argument goes both ways. If I can’t quantify success, how can you quantify failure? How is it possible to say a machine emulating a bat’s mind, or my mind, would lack the additional knowledge gained from actually having my qualia, if the input/output mapping is already perfect? Wouldn’t that additional knowledge then necessarily have to be causally inert, and thus be purged by the next GC run?
The necessary absence of both success and failure hints at incoherence in the question.
I suspect the distinction is that the quale itself, stripped of the information it accompanies, doesn’t tell me anything, any more than a database ID does. The meaning comes from what it references, and what it references can be communicated and compared. Not necessarily in a form that allows us to experience it—thus the “feeling that there’s something missing”—but you can’t communicate that anyway without mangling my cognitive architecture to support bat-like senses, at which point questioning that I have bat qualia will be like questioning that other people “truly perceive red”—mildly silly. The qualia were never what it was about—red isn’t about red, it’s about wavelength and fire and sunset and roses. The quale is just the database ID. I suspect that the ability to even imagine a different person who perceives green where you do red is a bug of our cognition—we gained the ability to model other minds relatively late, and there was no good reason for evolution to program in the fact that we cannot directly compare our database IDs to other people’s. (I mean, when was that ever gonna become relevant?)
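The database-ID analogy can be made literal with a small sketch (the table and labels are invented for illustration): two stores with the same schema hold the same facts, yet their auto-assigned row IDs differ by insertion history, so the referenced content compares fine across stores while the IDs themselves carry no cross-store meaning.

```python
# Two "minds" as databases with identical schemata but different histories.
import sqlite3

def build(conn, rows):
    conn.execute("CREATE TABLE percepts (id INTEGER PRIMARY KEY, label TEXT)")
    conn.executemany("INSERT INTO percepts (label) VALUES (?)",
                     [(r,) for r in rows])

a = sqlite3.connect(":memory:")
b = sqlite3.connect(":memory:")
build(a, ["red", "green", "blue"])
build(b, ["green", "blue", "red"])   # same facts, different insertion order

# The *referenced content* can be compared across databases...
assert {r[0] for r in a.execute("SELECT label FROM percepts")} == \
       {r[0] for r in b.execute("SELECT label FROM percepts")}

# ...but the IDs themselves have no cross-database meaning:
id_red_a = a.execute("SELECT id FROM percepts WHERE label='red'").fetchone()[0]
id_red_b = b.execute("SELECT id FROM percepts WHERE label='red'").fetchone()[0]
print(id_red_a, id_red_b)  # 1 and 3: "red" is the same fact under different handles
```

On the analogy, asking whether your quale of red matches mine is like asking whether row 1 in one database “matches” row 3 in another: the question dissolves once you compare what the handles reference.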
It’s important to distinguish novel qualia from foreign qualia. I may not be able to model bat qualia, but I can elicit a novel quale by trying food I haven’t tasted before, etc. Everyone has had an experience of that kind, and almost everyone finds that direct experience conveys knowledge that isn’t conveyed by any description, however good.
You seem to have persuaded yourself that qualia don’t contain information on the basis of an untested theory.
I would suggest that the experiences of novel qualia that everyone has had are empirical data that override theories.
We actually know quite a lot about qualia as a result of having them...it just isn’t reductionistic knowledge.
You also seem to have reinvented the idea I call computational zombies. It’s physically possible for an AI to be a functional duplicate of a human but to lack qualia, e.g. by having the wrong qualia [edit: I meant wrong physics, not wrong qualia]. However, that doesn’t prove qualia are causally idle.
It’s also important to distinguish functional idleness from causal idleness. The computational zombie won’t have a box labelled “qualia” in its functional design. But functional designs don’t do anything unless implemented. A computer is causally driven by physics. If qualia are part of physics, then they’re doing the driving along with [the rest of] it.
I don’t know what question you think is rendered incoherent.
There may be naturalistic reasons to believe we shouldn’t be able to model other minds, but that does not rescue the claim, if you are still making it, that there is a reductive explanation of qualia. Rather, it is an excuse for not having such an explanation.
You seem to have persuaded yourself that qualia don’t contain information on the basis of an untested theory. I would suggest that the experiences of novel qualia that everyone has had are empirical data that override theories.
So what information does a new taste contain? What it’s similar to, what it’s dissimilar to, how to compare it, what it reminds you of, what foods trigger it—but all of that is information that can be known; it doesn’t need to be experienced. So what information does the pure quale contain?
If that information could be fully known (in a third-person way) then you could just read the label and not drink the wine. Things don’t seem to work that way.
Yeah, but at that point you have to ask: what makes you think qualia, or rather “the choice of qualia as opposed to a different representation”, is a kind of information about the world? It seems to me more plausible that it’s just a fact about the way our minds work. Like, the bat’s use of qualia is an implementation aspect of the batmind and can thus probably be separated from the behavior of the bat. (Bats make this easier because I suspect they don’t reflect on the fact that they perceive qualia.) Are there functions that can only be expressed with qualia?
As a similar example, imagine a C-Zombie, a human that is not conscious and does not speak of consciousness or having qualia. This human’s mind uses qualia, but his model of the world contains no quale-of-qualia, no awareness-of-awareness, no metarepresentation of the act of perception. Can I reimplement his mind, without cheating and making him lie, to not use qualia? Is he a different person after? (My intuition says “yes” and “no”.)
[edit] My intuition wishes to update the first response to “maybe, idk”.
I’m not sure what sort of knowledge we are talking about, and I suspect that Nagel’s argument is based on an equivocation.
If we are talking about epistemic beliefs, expectations on observations, then we can certainly study the phenomenon of bat consciousness with a reductionist (that is, scientific) approach.
What we can’t do is to simulate the mental states of a bat using our innate agent-simulation mental machinery, for the same reason a color-blind person can’t simulate the mental state of a non-color-blind person perceiving red and green as different colors, or Mary can’t simulate perceiving colors.
These experiences are a type of knowledge that can’t be obtained from scientific research, even though they aren’t (in principle) intrinsically epistemically meaningful: if you take Mary outside her back-and-white room, she will experience new mental states even though her epistemic beliefs essentially don’t change.
It seems to me that this issue stems just from a limitation of our mental machinery, not an intrinsic flaw of the scientific method or a non-physical nature of consciousness.
If everything can be explained in reductiomistic terms, then so can what it feels like to actually instantiate an experience. If actual experience can’t be explained reductionalistically, then reductionism fails.
I see no reason why Mary should not update on “reductionism can explain anything” After all, she has just had a directly contrary experience.
The question is what “explains” mean. Her experience shows that “reading about science in a grey room can not put your brain into arbitrary states” (e.g. the states that arise from coloured rooms). So there are some things it can not make you “know”, if by know you mean “empathatize with” (== “simulate with innate agent simulation machinery”). But that’s not really surprising—it mostly sounds interesting because of equivocation with other senses of “know”.
Daniel Dennet’s analysis of Mary basically works by pointing out this equivocation. He notes that the kind of knowledge that Mary does have is enough to do impressive things such as recognizing a prank if she is handed a blue banana. So the thing she ‘learns’ is less dramatic than what the phrasing of the story (“now I finally know what it is like to experience colour!”) would lead you to expect.
If physical reductionsm explains everything, then it explains what it is like to be in some brain state. If you assume that instantiating a brain state is necessary to fully understanding it, you have conceded the point of Nagel and other qualiaphiles. It is not surprising—it is intuitive—that there is something special about instantiation. However, that is not compatible with strong reductionist intuitions. That is what makes Nagels argument interesting.
If you are willing to adopt how-was-it-for-me behaviourism, there is nothing special about Mary. But most find that position intuitive.
So let me preface this by saying that I don’t have any world-shattering insights about this, I think my view is the bog-standard naive physicalism. But it seems to me that naive physicalism can handle these issues.
I think the concepts “explain” and “understand” cause confusion here, so lets switch to the hopefully easier “is experiencing” and “has ever experienced”. So suppose that for any given subjective sensation, say redness, there is some class of brain states such that an observer is experiencing that sensation iff their brain-state is in the class. We can figure out what the class is e.g. by asking people what they are experiencing while we carefully examine their brain. Then, to (“fully”) experience a sensation, you need to instantiate a given brain state. Does this mean that there is something special about instantiation? I mean, yes there is—but in a quite trivial sense, which seems compatible with reductionist intuitions.
I am not sure what is supposed to be trivial or special here. If you belIevethat there is something extra about instantiating a brain state, such that you have to instantiate it to fully understandi it ,you have already rejected the strongest form of reductionism, because strong reductionism claims to explain everything reductionalistically, ie by presenting some sufficiently complicated formula.
You can .argue that you only need to put labels on brain states, and thereby explain Red as the brain state with a certain label. That would be ontologocally neutral and therefore not commit you to non-physicalist ontology . However, since it is non commital, it doesn’t imply physicalism either, since you could equally be labelling subjective states with brain scans. You have decided to go one way and not the other,but that is your decision.
Taking correlation to be causation is one thing : taking it to be identity is another.
You keep using the word “understand” without defining it. I thought V_V’s original comment was good because it points out that there are two plausible meanings that could be in play. If you want “predict observations”, then nothing is missing. If you want “simulate using innate simulation machinery”, then indeed you need something extra, but it’s not a better metaphysics, it’s a JTAG port to your brain.
Why would I even want to simulate a state in my own brain, unless it brings some increase in knowledge?
You can rescue reductionism by maintaining that you get nothing from instantiating the state yourself, that Mary has no “aha!”
You can also rescue it by maintaining that it predicts the subjective state that brings about the aha.
But you can’t maintain that there is some point to personal instantiation, which is not impactive on reductionism. If it’s s thing and it’s not predicted by reductionism, then there is a thing reductionism can’t explain in it own terms.
Reductionism can explain it—ie. causally justify and predict its appearance and behavior. But reductionism cannot explain it to people—ie. induce in their head the appropriate pattern by vocal communication. Those are different meanings of the same word. Reductionism can causally map any bat-brain output to its inputs and state, but it cannot be used as an argumentive tool to induce in listeners an analogue of the bat’s mindstate.
When a tree falls in a forest...
Reductionism can’t explain it. It can’t predict novel experience, eg what it is like to be bat on L.SD. Reductionism also can’t predict experience by applying consistent laws. You can match off known brain states to known subjective experiences in a kind of dictionary or database, but that is not what we normally mean by EXPLANATION.
The idea that you cannot have an explanation that you cannot explain to anybody is problematical. How is that different to not having an explanation? (Owen Flanagan suggests that scientists could test a predictive theory of qualia again their own experience. But again, that abandons strong physicalism, since it accepts the existence of irreducible subjectivity)
It doesn’t predict novel experience; in other words, it predicts I will not suddenly feel like a bat on LSD. This is correct, since I won’t, not being a bat.
What I mean by explanation is “provide a causal model that predicts future behavior”. What do you mean by explanation?
I have no idea what you’re saying here. Did you mean “can have an explanation that you cannot explain to anybody”—and that’s not what I said. I said reductionism cannot be used as a tool to induce in people’s minds patterns analogous to a bat’s minds. Let’s taboo the word “explanation” here—say what you mean.
It is no advertisement for a reductionistic theory of qualia that it can’t ever make novel predictions, since other reductionistic theories can predict novel phenomena.
It is no advertisement for a reductionstic theory of qualia that it doesn’t predict novel experiences in someone taking LSD, since, empirically, such experiences are reported.
Saying that something won’t happen is not much of a prediction, or I can predict tomorrow’s weather by saying there won’t be a tornado.
Reductive explantation show how higher level properties and behaviours arise from lower level properties and behaviours. Stating that they arise is not showing how. Explanations answer how questions. By relating concepts. So as to increase understanding in the reader.
We have well known examples of reductive explanations, such as the reduction of heat to molecular motion. They are conceptual. They are not look up tables that match off one property against another. Nor are they causal models, in the sense of directed graphs, since a directed graph has no conceptual content.
We don’t know how qualia, the HL properties in question, relate to their bases. If we did you could reverse the reduction, and construct code or electronics that could generate qualia. Which we can’t, at all, Although we can build memory and cognition, and even, absent qualia, perception.
You still haven’t said why you think instantiation is important.
If we had explanations that outputted some formula or sentence that told us what experience was like, we would not need personal instantiation … Mary would not say aha! If such an explanation exists, I would like to see it.
If out putative explanations don’t tell us what the experiences are like, then instantiation would be necessary to know what they are like,...and you wouldn’t have an explanation of qualia, because qualia are what experiences feel like.
What do you expect reductionism to do? We already know what inputs correspond to what qualia, so that cannot be the question. We know that the process very probably involves the brain, because that’s where light goes in and talk of qualia comes out. I would add to that that it’s exceedingly unlikely that a tiny patch of dense, electrically active tissue would just happen to implement some basic physical law that is not found anywhere else in nature, and in any case this does not seem to me to be necessary.
I think the problem is that reductionist explanations, as they stand, lack the power to construct in our heads a model that bridges sensory, electric input and cognitive experiences. To which I say—would you expect it to? Cognition, reflection and perception are probably the parts of our minds most tightly coupled with the rest of the hardware. They’re the areas where intuition has the largest importance for efficient processing, and thus where a physical account adds the least relative value. Complicated design, existing hacky intuitive model—it makes sense that this is the last-understood part of our minds.
That said, I think you’re expecting too much from reductionism. How would you expect a working theory of qualia to look like? I think to say that a theory of qualia should allow us to cognitively simulate the subjective experience of a different cognitive model asks something of the theory that our minds cannot deliver. Our wetware has no reason to have this capability.
It’s like saying a physical simulation of a loudspeaker is necessarily incomplete unless it can make sound come out of your CPU.
I consider reductive explanation to be a form of explanation that offers a persuasive association of concepts together with the ability to make quantitative and novel predictions. If someone doubts that there .is a reductive explanation of heat, you hand them a textbook of thermodynamics, and the matter is settled.
I consider reductionism to the handwaving, philosophical, claim that reductive explanation is the best explanation and/or that it will eventually succeed in all cases.
So a reductive explanation of qualia would be something controversial written up in a text book; whereas reductionism about qualia would be the controversial claim that a reductive explanation of qualia is possible.
“We don’t need new physics” is a typical handwaving claim for reductionism, which can be countered by the handwaving claim that we .do need new physics (eg Chalmers’ The Conscious Mind)
I don’t expect toasters to make coffee, and that isn’t a problem, because it is not their function. It isn’t obviously a problem that explanations don’t induce what they are explaining. The standard explanation of photosynthesis doesn’t make me photosynthesise,and is none the worse for it.Explanations are supposed to explain. That is their function. If there is something you can learn by actually having a quale that you didn’t learn from a supposed explanation, then the supposed explanation was not afull explanation. That actually having qualia tells you something is what makes it a problem, rather than a wild irrelevancy, that an explanation don’t induce what it is explaining.
I still don’t know why you think the induction ofqualia is important.
I don’t expect reductive explanations to deliver induction of what they explain. I do expect full explanations to fully explain. If Mary’s Aha! tells her nothing, she never had a full explanation...and that isn’t due to lack of detail in the explanations or brainpower on her part, both of which are waved away in the story.
I don’t know what your intuitions are, because you won’t tell me. However, I suspect that they may be inconsistent.
Darn. I had a huge post written up about my theory of how qualia work, and how you’d build a mind that could generate new qualia from a reductionist description, when I realized that I had no way to quantify success, because there’s no way to compare qualia between two brains. Our qualia are a feature of the cognitive architecture we use, and it’d be as silly to try and place them side by side as it would to try and compare ID handles between two different databases (even with the same schema, but especially with different schemata).
But this argument goes both ways. If I can’t quantify success, how can you quantify failure? How is it possible to say that a machine emulating a bat’s mind, or my mind, would lack the additional knowledge gained from actually having my qualia, if the input/output mapping is already perfect? Wouldn’t that additional knowledge then necessarily have to be causally inert, and thus be purged by the next GC run?
The necessary absence of both success and failure hints at incoherence in the question.
I suspect the distinction is that the quale itself, stripped of the information it accompanies, doesn’t tell me anything, any more than a database ID does. The meaning comes from what it references, and what it references can be communicated and compared. Not necessarily in a form that allows us to experience it—thus the “feeling that there’s something missing”—but you can’t communicate that anyway without mangling my cognitive architecture to support bat-like senses, at which point questioning that I have bat qualia will be like questioning that other people “truly perceive red”—mildly silly. The qualia were never what it was about—red isn’t about red, it’s about wavelength and fire and sunset and roses. The quale is just the database ID. I suspect that the ability to even imagine a different person that perceives green where you do red is a bug of our cognition—we gained the ability to model other minds relatively late, and there was no good reason for evolution to program in the fact that we cannot directly compare our database IDs to other people’s. (I mean, when was that ever gonna become relevant?)
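The database-ID analogy can be made concrete with a small sketch (all names here are hypothetical, invented for illustration): two independent stores hold the same externally meaningful records, but each assigns its own internal handles, so comparing the raw IDs across stores is meaningless while comparing what they reference is not.

```python
# Two independent "minds" store the same facts about the world,
# but each assigns its own arbitrary internal IDs.
def build_store(records):
    """Assign auto-increment internal IDs to externally meaningful records."""
    return {i: rec for i, rec in enumerate(records)}

world_facts = ["wavelength ~700nm", "fire", "sunset", "roses"]

store_a = build_store(world_facts)                  # IDs 0..3 in one order
store_b = build_store(list(reversed(world_facts)))  # same facts, different IDs

# Comparing what the IDs *reference* is meaningful:
same_content = set(store_a.values()) == set(store_b.values())  # True

# Comparing raw IDs across stores is not: ID 0 names different
# records in each store, and neither assignment is "wrong".
id_zero_matches = store_a[0] == store_b[0]  # False here, but arbitrary either way
```

On this picture, asking whether my “red” ID equals yours is like `id_zero_matches`: a comparison between handles that were never defined relative to each other.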
But that’s just my intuition on the topic.
It’s important to distinguish novel qualia from foreign qualia. I may not be able to model bat qualia, but I can elicit a novel quale by trying food I haven’t tasted before, etc. Everyone has had an experience of that kind, and almost everyone finds that direct experience conveys knowledge that isn’t conveyed by any description, however good.
You seem to have persuaded yourself that qualia don’t contain information on the basis of an untested theory. I would suggest that the experiences of novel qualia that everyone has had are empirical data that override theories.
We actually know quite a lot about qualia as a result of having them...it just isn’t reductionistic knowledge.
You also seem to have reinvented the idea I call computational zombies. It’s physically possible for an AI to be a functional duplicate of a human, but to lack qualia, e.g. by having the wrong qualia [edit: I meant wrong physics, not wrong qualia]. However, that doesn’t prove qualia are causally idle.
It’s also important to distinguish functional idleness and causal idleness. The computational zombie won’t have a box labelled “qualia” in its functional design. But functional designs don’t do anything unless implemented. A computer is causally driven by physics. If qualia are part of physics, then they’re doing the driving along with [the rest of] it.
I don’t know what question you think is rendered incoherent.
There may be naturalistic reasons to believe we shouldn’t be able to model other minds, but that does not rescue the claim, if you are still making it, that there is a reductive explanation of qualia. Rather, it is an excuse for not having such an explanation.
So what information does a new taste contain? What it’s similar to, what it’s dissimilar to, how to compare it, what it reminds you of, what foods trigger it—but all of that is information that can be known; it doesn’t need to be experienced. So what information does the pure quale contain?
If that information could be fully known (in a third-person way) then you could just read the label and not drink the wine. Things don’t seem to work that way.
Yeah, but at that point you have to ask, what makes you think qualia, or rather, “the choice of qualia as opposed to a different representation”, is a kind of information about the world? It seems to me more plausible that it’s just a fact about the way our minds work. Like, the fact that the bat uses qualia is an implementation aspect of the batmind and can thus probably be separated from the behavior of the bat. (Bats make this easier because I suspect they don’t reflect on the fact that they perceive qualia.) Are there functions that can only be expressed with qualia?
As a similar example, imagine a C-Zombie, a human that is not conscious and does not speak of consciousness or having qualia. This human’s mind uses qualia, but his model of the world contains no quale-of-qualia, no awareness-of-awareness, no metarepresentation of the act of perception. Can I reimplement his mind, without cheating and making him lie, to not use qualia? Is he a different person after? (My intuition says “yes” and “no”.)
[edit] My intuition wishes to update the first response to “maybe, idk”.
Qualia are information for us: if you have no visual qualia, you are blind, etc.
?
Edited