It seems to be taken for granted here that self-awareness = qualia. If something is self-aware and talking or thinking about how it has qualia, that sure is evidence of it having qualia, but I’m not sure the reverse direction holds. What about internal-state-tracking is necessary for creating the mysterious redness of red exactly, or the hurt-iness of pain?
I can see how pain as defined above the spoiler section doesn’t necessarily lead to pain qualia, and in many simple architectures obviously doesn’t, but I don’t see how processing a summary of pain effects on the network does lead to it. Say the summary is a single bit that’s either 0, “no pain right now”, or 1, “pain active”. What makes that bit feel hurty to the network, instead of, say, looking red, or smelling sweet, or any other qualia? I don’t feel any more able to answer these questions after adding the hypothesis “self-awareness necessary” to my model of the situation.
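To make my confusion concrete, here’s a toy sketch in Python (all the names, like damage_summary and Quale, are invented for illustration). The bit itself is just a 0 or a 1; which quale it corresponds to is fixed entirely by an arbitrary lookup we stipulate, which is exactly what seems unexplained:

```python
from enum import Enum

class Quale(Enum):
    """Hypothetical labels for how a signal might 'feel' to the system."""
    HURTY = "pain"
    RED = "redness"
    SWEET = "sweetness"

def damage_summary(network_state: dict) -> int:
    """Collapse the network's pain-related activity into a single summary bit."""
    return 1 if network_state.get("damage_signal", 0.0) > 0.5 else 0

# Nothing about the bit picks out one quale over another; the choice
# lives entirely in this wiring, which we could set any way we like.
INTERPRETATION = {0: None, 1: Quale.HURTY}  # why not Quale.RED?

state = {"damage_signal": 0.9}
bit = damage_summary(state)
print(bit, INTERPRETATION[bit])  # 1 Quale.HURTY -- by stipulation only
```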
My mind sure agrees that it’s kind of suspicious how these mysterious qualia thingies only ever seem to exert a direct influence on the world when agents engage in modelling and introspection about them, and maybe that’s hinting that self-awareness is, or causes, qualia somehow. But I’ve never gotten further than this vague intuition in justifying or modelling the connection.
Great questions!

What about internal-state-tracking is necessary for creating the mysterious redness of red exactly, or the hurt-iness of pain?
Well, as you note, the only time we notice these things is when we self-model, and they otherwise have no causal effect on reality; a mind that doesn’t self-reflect is not affected by them. So… that can only mean they only exist when we self-reflect.
Say the summary is a single bit that’s either 0, “no pain right now”, or 1, “pain active”. What makes that bit feel hurty to the network, instead of, say, looking red, or smelling sweet, or any other qualia?
Mm, the summary-interpretation mechanism? Imagine that instead of an eye, you had a binary input, and the brain was hard-wired to parse “0” from this input as a dog picture and “1” as a cat picture. So you perceive 1, the signal travels to the brain and enters the pre-processing machinery, which retrieves the cat picture and shoves it into the visual input of your planner-part, claiming it’s what the binary organ perceives.
Similarly, the binary pain channel you’re describing would retrieve some hard-coded idea of how “I’m in pain” is meant to feel, convert it into a format the planner can parse, put it into some specialized input channel, and the planner would make decisions based on that. This would, of course, not be the rich, varied, context-dependent sense of pain we have; it would be, well, binary, always feeling the same.
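Here’s a minimal sketch of the architecture I’m gesturing at, in Python (the names Percept, preprocess, planner, and so on are all hypothetical; this illustrates the wiring, not a claim about actual brains). The planner never sees the raw bit, only the hard-coded percept the pre-processing machinery retrieves for it:

```python
from dataclasses import dataclass

@dataclass
class Percept:
    """A representation in whatever format the planner consumes."""
    channel: str
    content: str

# Hard-coded lookup: the pre-processing machinery's fixed 'idea' of
# what each raw signal is supposed to look like to the planner.
HARDWIRED_PERCEPTS = {
    0: Percept(channel="vision", content="dog picture"),
    1: Percept(channel="vision", content="cat picture"),
}

def preprocess(binary_organ_signal: int) -> Percept:
    """Retrieve the stored percept and present it as the organ's output."""
    return HARDWIRED_PERCEPTS[binary_organ_signal]

def planner(percept: Percept) -> str:
    """The planner only ever sees the pre-processed percept, never the raw bit."""
    return f"decide based on: {percept.content} (via {percept.channel})"

print(planner(preprocess(1)))  # decide based on: cat picture (via vision)
```

The binary pain channel would be this same pipeline with a pain-format percept as the single entry for “1”, which is why it would always feel exactly the same.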