If there is something you can learn by actually having a quale that you didn’t learn from a supposed explanation, then the supposed explanation was not a full explanation. That actually having qualia tells you something is what makes it a problem.
Darn. I had a huge post written up about my theory of how qualia work, and how you’d build a mind that could generate new qualia from a reductionist description, when I realized that I had no way to quantify success, because there’s no way to compare qualia between two brains. Our qualia are a feature of the cognitive architecture we use, and it’d be as silly to try and place them side by side as it would to try and compare ID handles between two different databases (even with the same schema, but especially with different schemata).
But this argument goes both ways. If I can’t quantify success, how can you quantify failure? How is it possible to say that a machine emulating a bat’s mind, or my mind, would lack the additional knowledge gained from actually having my qualia, if the input/output mapping is already perfect? Wouldn’t that additional knowledge then necessarily have to be causally inert, and thus be purged by the next GC run?
The necessary absence of both success and failure hints at incoherence in the question.
I suspect the distinction is that the quale itself, stripped of the information it accompanies, doesn’t tell me anything, any more than a database ID does. The meaning comes from what it references, and what it references can be communicated and compared. Not necessarily in a form that allows us to experience it—thus the “feeling that there’s something missing”—but you can’t communicate that anyway without mangling my cognitive architecture to support bat-like senses, at which point questioning that I have bat qualia will be like questioning that other people “truly perceive red”—mildly silly. The qualia were never what it was about—red isn’t about red, it’s about wavelength and fire and sunset and roses. The quale is just the database ID. I suspect that the ability to even imagine a different person that perceives green where you do red is a bug of our cognition—we gained the ability to model other minds relatively late, and there was no good reason for evolution to program in the fact that we cannot directly compare our database IDs to other people’s. (I mean, when was that ever gonna become relevant?)
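To make the database analogy concrete, here is a toy sketch in Python; the tables, IDs, and values are all made up for illustration.

```python
# Toy sketch of the "quale as database ID" analogy.
# Two databases store the same facts about "red" under the same schema,
# but hand out different ID values for the row.
db_a = {17: {"wavelength_nm": 700, "associations": ["fire", "sunset", "roses"]}}
db_b = {42: {"wavelength_nm": 700, "associations": ["fire", "sunset", "roses"]}}

id_a, id_b = 17, 42

# Comparing the handles themselves across databases is meaningless:
print(id_a == id_b)              # False, and it tells you nothing about "red"

# Comparing what the handles reference is perfectly well-defined:
print(db_a[id_a] == db_b[id_b])  # True, the referenced content matches

# A handle stripped of its database carries no information at all:
print(id_a)                      # just 17, meaningless without db_a
```

Two minds with the same “schema” are like db_a and db_b here: the handles themselves can’t be lined up, but everything the handles point at can be put side by side.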
But that’s just my intuition on the topic.
It’s important to distinguish novel qualia from foreign qualia. I may not be able to model bat qualia, but I can elicit a novel quale by trying food I haven’t tasted before, etc. Everyone has had an experience of that kind, and almost everyone finds that direct experience conveys knowledge that isn’t conveyed by any description, however good.
You seem to have persuaded yourself that qualia don’t contain information on the basis of an untested theory. I would suggest that the experiences of novel qualia that everyone has had are empirical data that override theories.
We actually know quite a lot about qualia as a result of having them... it just isn’t reductionistic knowledge.
You also seem to have reinvented the idea I call computational zombies. It’s physicalistically possible for an AI to be a functional duplicate of a human but to lack qualia, e.g. by having the wrong physics. However, that doesn’t prove qualia are causally idle.
It’s also important to distinguish functional idleness and causal idleness. The computational zombie won’t have a box labelled “qualia” in its functional design. But functional designs don’t do anything unless implemented. A computer is causally driven by physics. If qualia are part of physics, then they’re doing the driving along with [the rest of] it.
I don’t know what question you think is rendered incoherent.
There may be naturalistic reasons to believe we shouldn’t be able to model other minds, but that does not rescue the claim, if you are still making it, that there is a reductive explanation of qualia. Rather, it is an excuse for not having such an explanation.
So what information does a new taste contain? What it’s similar to, what it’s dissimilar to, how to compare it, what it reminds you of, what foods trigger it—but all of that is information that can be known; it doesn’t need to be experienced. So what information does the pure quale contain?
If that information could be fully known (in a third-person way), then you could just read the label and not drink the wine. Things don’t seem to work that way.
Yeah, but at that point you have to ask: what makes you think qualia, or rather “the choice of qualia as opposed to a different representation”, is a kind of information about the world? It seems to me more plausible that it’s just a fact about the way our minds work. Like, the fact that the bat uses qualia is an implementation aspect of the batmind and can thus probably be separated from the behavior of the bat. (Bats make this easier because I suspect they don’t reflect on the fact that they perceive qualia.) Are there functions that can only be expressed with qualia?
As a similar example, imagine a C-Zombie, a human that is not conscious and does not speak of consciousness or having qualia. This human’s mind uses qualia, but his model of the world contains no quale-of-qualia, no awareness-of-awareness, no metarepresentation of the act of perception. Can I reimplement his mind, without cheating and making him lie, to not use qualia? Is he a different person after? (My intuition says “yes” and “no”.)
[edit] My intuition wishes to update the first response to “maybe, idk”.
Qualia are information for us: if you have no visual qualia, you are blind, etc.