I was actually starting another article that presents a solution (well, a research program) for the problem of qualia. [1] The idea is this:
The concept of qualia becomes mysterious when we have a situation in which sensory data (edit: actually, cognition of the sensory data) is incommensurable (not comparable) between beings. So the key question is, when would this situation arise?
If you have two identical robots with identical protocols, you have no qualia problem. They can directly exchange their experiences and leave no question about whether “my red” is “your red”.
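(To make the “identical protocols” case concrete, here’s a toy sketch in Python; the format and names are invented for illustration. The point is that the encoding is an external rulebook both robots share, so a memory written by one is bit-for-bit meaningful to the other.)

```python
import struct

# Shared, documented protocol: one unsigned byte per photoreceptor channel.
EXPERIENCE_FORMAT = "3B"

def encode_experience(r, g, b):
    """Serialize a color experience under the shared protocol."""
    return struct.pack(EXPERIENCE_FORMAT, r, g, b)

def decode_experience(blob):
    """Any robot holding the rulebook recovers the exact same experience."""
    return struct.unpack(EXPERIENCE_FORMAT, blob)

memory_from_a = encode_experience(255, 0, 0)            # robot A sees "red"
assert decode_experience(memory_from_a) == (255, 0, 0)  # robot B reads it losslessly
```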
But here’s the kicker: imagine if the robots don’t use identical protocols. Imagine that they instead simply use themselves to collect and retain as much information about their experiences as physically possible. They optimize “amount I remember”.
In that case, they will use every possible trick to make efficient use of what they have, no longer limited by the protocols. So they will eventually use “encoding schemes” for which there is no external rulebook; the encoding is implicitly “decompressed” by their overall functionality. They have not left a “paper trail” that someone else can use and make sense of (without significant reverse engineering effort).
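(Here’s a toy illustration of what “no external rulebook” might look like; this is my own invention, not a claim about any real compression scheme. Each robot derives its codebook from its own private history, so identical hardware still produces mutually unreadable memories.)

```python
from collections import Counter

def build_codebook(history):
    """Give the shortest codes to this robot's most frequent symbols."""
    ranked = [sym for sym, _ in Counter(history).most_common()]
    return {sym: format(i, "b") for i, sym in enumerate(ranked)}

def encode(codebook, experience):
    return " ".join(codebook[sym] for sym in experience)

robot_a = build_codebook("rrrrggbb")  # A's history: mostly red
robot_b = build_codebook("bbbbggrr")  # B's history: mostly blue

print(encode(robot_a, "rgb"))  # "0 1 10"
print(encode(robot_b, "rgb"))  # "10 1 0" -- same scene, different bits

# Without A's history (or serious reverse engineering), B can't tell
# whether A's "0" means red, blue, or anything else.
```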
In that case, you can no longer directly port one’s experience over into the other’s. To each other, the encoding looks like meaningless garbage. But if they’re still alive, they can still achieve some level of commensurability. They can look at the same uniform surface and ask each other, “how does your photo-modality respond to this thingamajig?” [2] They can then synchronize internal experiences with each other and arrive at a common conception of “red”, even as it may still differ from what exactly the other robot does internally upon receiving red-data.
(And they can further constrain the environment to make sure they are talking about the same thing if e.g. one robot has tightly-coupled sensory cognition in which sensation of color varies with acoustics of the environment.)
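(A runnable toy version of this probe-and-compare exchange; all the names and internal codes are made up. Note that the internal encodings stay private; only behavior toward shared stimuli gets exchanged.)

```python
class Robot:
    def __init__(self, private_codebook):
        self._codebook = private_codebook  # internal state; never shared

    def sense(self, stimulus):
        """Return this robot's private internal code for a stimulus."""
        return self._codebook[stimulus]

def calibrate(a, b, shared_stimuli):
    """Both robots observe the same surfaces and bind them to public labels."""
    return {stim: {"label": f"color-{i}",
                   "a_code": a.sense(stim),   # opaque to B
                   "b_code": b.sense(stim)}   # opaque to A
            for i, stim in enumerate(shared_stimuli)}

a = Robot({"650nm": 0b01, "470nm": 0b10})  # A's arbitrary internal codes
b = Robot({"650nm": "Q7", "470nm": "Z2"})  # B's look nothing like A's

agreement = calibrate(a, b, ["650nm", "470nm"])
# Both robots now call 650nm light "color-0" and can discuss it meaningfully,
# even though A stores it as 0b01 and B stores it as "Q7".
```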
This, I claim, is the status of humans with respect to each other: We have very similar general “body plans” but also use a no-holds-barred, standards-free method for creating (encoding) memories that puts up a severe—but partially circumventable—barrier to comparing internal experiences.
(Oh, and since you guys are probably still wondering: even I wouldn’t fault you for failing to explain color to a blind man. The best I would expect is that you can say, “Alright, you know how smelling is different from hearing? Well, seeing is as different from both of those as they are from each other.”)
[1] Yes, I start a lot of articles but don’t finish them … I have about three times as many in progress as I have posted.
[2] Remember: even though they have different internal experiences, they can still tell that a particular observation depends on a particular sensor by turning it on and off, and can thus meaningfully talk about how their cognition relates to that sensor.
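(Footnote [2] as a toy intervention test; the Sensor class and readings are hypothetical. Toggling a sensor and checking whether an observation changes is all it takes to establish the dependence.)

```python
class Sensor:
    def __init__(self):
        self.enabled = True

def depends_on_sensor(observe, sensor):
    """Ablation test: does the observation change when the sensor is off?"""
    sensor.enabled = True
    on_reading = observe()
    sensor.enabled = False
    off_reading = observe()
    sensor.enabled = True  # restore
    return on_reading != off_reading

camera = Sensor()
microphone = Sensor()
see_red = lambda: 255 if camera.enabled else None  # toy cognition: red tracks the camera

assert depends_on_sensor(see_red, camera)          # red-data depends on the camera...
assert not depends_on_sensor(see_red, microphone)  # ...and not on the microphone
```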
Before finishing (or perhaps as a sequel?) you should make sure to watch Christof Koch’s “neural correlates of consciousness” talk. He’s been giving variations on this talk for something like 10 years that I know of, and it’s pretty polished. It’s gotten better over the years, and the speaker is the source of my current working definition of consciousness (quoted below), which is not about language I/O and compression but about internal experiences themselves and the systems that implement them.
The core insight is that you can show someone a visual trick (like the faces/goblet image) and go back and forth between “seeing different interpretations”. When you’re in one or the other internal state, this is you having different kinds of “qualia”, and presumably these distinct perceptual states have “biological correlates”.
Manipulation of these internal mental states and study of the associated physical systems become the “object of study” in order to crack the mind-body problem. Once you’ve got the neural level, you can ask about higher-level issues like algorithms, or about deeper mechanisms like neurotransmitters, genes, and so on. Full understanding would imply that we could create mutant “zombie mice” that would have no qualia (of certain sorts) and be incapable of whatever behavior was “computed” in a way that involved (that sort of) qualia.
Ideally we would have a theory to predict and explain such phenomena, and perhaps we’d be able to invent things like a pill that lets you “become a p-zombie” for an hour (though I suspect part of that would involve shutting down enough memory-formation processes that you would not be able to remember the experience except via something external like videotape).
The Q&A has much more sophisticated objections/questions than you usually get on the subject of minds. The final question ends with Koch’s working theory, which sounds about right to me. Quoting Koch when asked why he thinks bees are probably conscious and why he became vegetarian:
Rather than endless speculation there has to be some complex behavior, so forget about plants or even simple single celled organisms. If they do very simple stereotypical things I see no reason to ascribe consciousness to them. This may be wrong ultimately, but I think right now that’s my index for consciousness: reasonably complex, non-stereotypical behavior that involves online dynamic storage of information.
Thanks for the pointer! This will help me connect my speculations to the existing literature.
Is there any text version/transcript of this lecture, or a paper that explains the points in the talk?
My summary above cuts to what I think is the core insight: focusing clearly and experimentally on exactly the elements of interest, consciousness and neurons. The talk itself is a summary of the main points of many different papers, plus some demonstrations of the experimental manipulations.
The talk itself (rather than the intros) starts about 4 minutes in. I recommend just watching it. Koch is a pretty good speaker and this is sort of his “dog and pony show” where he summarizes an entire research program in a way that’s been iteratively optimized for years. Your 60 minutes will not be wasted :-)
Upvoted, looking forward to you posting some of these in-progress articles. Should:
Be vice-versa?
Thanks! And yes, it should; I’ll correct that.
This seems to match up with an argument I made against writing oneself into the future, though I think your formulation is more general. I’m certainly quite interested in hearing more on the subject; I think you’re headed in a good direction.
Thank you much, and I definitely see the similarity with what you posted.
I may indeed be running into the problem of letting “good enough” become the enemy of “at all”. I’ll try to get these articles up in some presentable form soon.
The drafts you’ve been posting lately are interesting to me, and I’d like to see them fleshed out into top-level posts. I would also suggest adding external material and references to give more context to your thought experiments.
Thanks. But that’s one of my difficulties. I read a lot of stuff and so these ideas just “come together”. I don’t even know if there is a source that agrees with this idea. As it stands now, the only sources I believe I’d be able to cite are some of Gary Drescher’s discussion of qualia, and the information-theoretic basics of how compression works, and what makes it more or less effective.
Any suggestions (specific suggestions) for how to find the external references that would be relevant to this topic or the other ones I’ve posted recently?
By the way: I started keeping a list of planned top-level articles on my Wiki page.