what are the cognitive causes of people talking about consciousness and qualia
Based on the rest of your comment, I’m guessing you mean people talking about consciousness and qualia in the abstract and attributing them to themselves, not just talking about specific experiences they’ve had.
a cognitive algorithm that looks to me like it codes for consciousness, in the sense that if you were to execute it then it would claim to have qualia for transparent reasons and for the same reasons that humans do, and to be correct about that claim in the same way that we are
Why use the standard of claiming to be conscious/have qualia? That is one answer that gets at something that might matter, but why isn’t that standard too high?
For example, he wrote:
I give serious consideration to the proposition that I am reflective enough for consciousness only during the moments I happen to wonder whether I am conscious.
If this proposition is false, we need to allow unsymbolized (non-verbal) ways to self-attribute consciousness for self-attributing consciousness to matter in itself, right? Would (solidly) passing the mirror test be (almost) sufficient at this point? There’s a visual self-representation, and an attribution of the perception of the mark to this self-representation. What else would be needed?
Would it need to non-symbolically self-attribute consciousness generally, not just particular experiences? How would this work?
If the proposition is true, doesn’t this just plainly contradict our everyday experiences of consciousness? I can direct my attention towards things other than wondering whether or not I’m conscious (and towards things other than and unrelated to my inner monologue), while still being conscious, at least in a way that still matters to me and that I wouldn’t want to dismiss. We can describe our experiences without wondering whether or not we’re having (or had) them.
it would claim to have qualia for transparent reasons and for the same reasons that humans do, and to be correct about that claim in the same way that we are
What kinds of reasons? And what would being correct look like?
If unsymbolized self-attribution of consciousness is enough, how would we check just for it? The mirror test?
Based on the rest of your comment, I’m guessing you mean people talking about consciousness and qualia in the abstract and attributing them to themselves, not just talking about specific experiences they’ve had.
If I were doing the exercise, all sorts of things would go in my “stuff people say about consciousness” list, including stuff Searle says about Chinese rooms, stuff Chalmers says about p-zombies, stuff the person on the street says about the ineffable, intransmissible redness of red, stuff schoolyard kids say about how they wouldn’t be able to tell if the color they saw as green was the one you saw as blue, and so on. You don’t need to be miserly about what you put on that list.
Why use the standard of claiming to be conscious/have qualia? That is one answer that gets at something that might matter, but why isn’t that standard too high?
Mostly (on my model) because it’s not at all clear from the get-go that it’s meaningful to “be conscious” or “have qualia”; the ability to write an algorithm that makes the same sort of observable claims that we make, for the same cognitive reasons, demonstrates a mastery of the phenomenon even in situations where “being conscious” turns out to be a nonsense notion.
Note also that higher standards on the algorithm you’re supposed to produce are more conservative: if it is meaningful to say that an algorithm “is conscious”, then producing an algorithm that is both conscious, and claims to be so, for the same cognitive reasons we do, is a stronger demonstration of mastery than isolating just a subset of that algorithm (the “being conscious” part, assuming such a thing exists).
I’d be pretty suspicious of someone who claimed to have a “conscious algorithm” if they couldn’t also say “and if you inspect it, you can see how if you hook it up to this extra module here and initialize it this way, then it would output the Chinese Room argument for the same reasons Searle did, and if you instead initialize it that way, then it outputs the Mary’s Room thought experiment for the same reason people do”. Once someone demonstrated that sort of mastery (and once I’d verified it by inspection of the algorithm, and integrated the insights therefrom), I’d be much more willing to trust them (or to operate the newfound insights myself) on questions of how the ability to write philosophy papers about qualia relates to the ability of the mind to feel. But the qualifying bar for “do you have a reductionist explanation of consciousness” is “can you show me how to build something that produces the observations we set out to explain in the first place (people talking about ‘consciousness’) for the same cognitive reasons?”.
Note further that demonstrating an algorithm that produces the same sort of claims humans do (e.g., claims about the redness of red) for the same cognitive reasons is not the same thing as asserting that everything “with consciousness/qualia” must make similar claims.
If this proposition is false, we need to allow unsymbolized (non-verbal) ways to self-attribute consciousness for self-attributing consciousness to matter in itself, right?
My model of Eliezer says “In lieu of an algorithmic account of the cognitive antecedents of people insisting they are conscious, that sort of claim is not even wrong.” (And similarly with various other claims in that section.) My model continues: “You seem to me to be trying to do far more with the word ‘consciousness’ than your understanding of the phenomenon permits. I recommend doing less abstract reasoning about how ‘consciousness’ must behave, and more thinking about the cognitive causes behind the creation of the Mary’s Room hypothetical.”
What kinds of reasons?
My model says: “The list of reasons is not particularly small, in this case.”
And what would being correct look like?
“The claim is correct if the actual cognitive reasons for Searle inventing the Chinese Room hypothetical are analogous to the cognitive reasons that the alleged algorithm invents the Chinese Room hypothetical, and so on and so forth.
“This is of course difficult to check directly. However, fairly strong evidence of correctness can be attained by reading the algorithm and imagining its execution. Just as you can stare at the gears of a watch until you understand how their interactions make the watch-hands tick, at which point you can be justifiably confident that you understand the watch, you should be able to stare at a cognitive algorithm explaining ‘consciousness’ until you understand how its execution gives rise to things like ‘inner listeners’ ‘experiencing redness’ (in a suitably rescued sense), at which point you can be justifiably confident that you understand experience.
“Your fellow tribe members, who have not understood how gears can drive the hands of a watch, might doubt your claim, saying ‘There are many theories of how the watch works, ranging from internal gears to external solar radiation to the whims of the spirits. How are you so confident that it is the turning of little gears, never mind this specific mechanism that you claim you can sketch out in the dirt?’. And you could rightly reply, ‘When we unscrew the back, we see gears. And there is an arrangement of gears, which I understand, that by inspection would tick the hands in just the way we observe the hands to tick. And while I have not fully taken the watch apart, the visible features of the gears we can see when we unscrew the back match the corresponding properties of my simple gear mechanism. This is enough for me to be pretty confident that something like my mechanism, which I understand and which clearly by inspection ticks watch-hands, governs the watch before us.’”
Shouldn’t mastery and self-awareness/self-modelling come in degrees? Is it necessary to be able to theorize and come up with all of the various thought experiments (even with limited augmentation from extra modules or different initializations)? Many nonhuman animals could make some of the kinds of claims we make about our particular conscious experiences, for essentially similar reasons, and many demonstrate some self-awareness in ways other than passing the mirror test (some might pass a mirror test in a different sensory modality, or with some extra help, although some kinds of help would severely undermine a positive result). I won’t claim the mirror test is the only one Eliezer cares about; I don’t know what else he has in mind. It would be helpful to see a list of the proxies he has in mind and what they’re proxies for.
EY: I give serious consideration to the proposition that I am reflective enough for consciousness only during the moments I happen to wonder whether I am conscious.
Me: If this proposition is false, we need to allow unsymbolized (non-verbal) ways to self-attribute consciousness for self-attributing consciousness to matter in itself, right?
You: My model of Eliezer says “In lieu of an algorithmic account of the cognitive antecedents of people insisting they are conscious, that sort of claim is not even wrong.” (And similarly with various other claims in that section.) My model continues: “You seem to me to be trying to do far more with the word ‘consciousness’ than your understanding of the phenomenon permits. I recommend doing less abstract reasoning about how ‘consciousness’ must behave, and more thinking about the cognitive causes behind the creation of the Mary’s Room hypothetical.”
To make sure I understand correctly, it’s not the self-attribution of consciousness and other talk of consciousness like Mary’s Room that matter in themselves (we can allow some limited extra modules for that), but their cognitive causes. And certain (kinds of) cognitive causes should be present when we’re “reflective enough for consciousness”, right? And Eliezer isn’t sure whether wondering whether or not he’s conscious is among them (or a proxy/correlate of a necessary cause)?
Thanks, this is helpful.