What about internal-state-tracking is necessary for creating the mysterious redness of red exactly, or the hurt-iness of pain?
Well, as you note, the only time we notice these things is when we self-model, and they otherwise have no causal effect on reality; a mind that doesn’t self-reflect is not affected by them. So… that can only mean they exist only when we self-reflect.
Say the summary is a single bit that’s either 0, “no pain right now”, or 1, “pain active”. What makes that bit feel hurty to the network, instead of, say, looking red, or smelling sweet, or any other qualia?
Mm, the summary-interpretation mechanism? Imagine that instead of an eye, you had a binary input, and the brain was hard-wired to parse “0” from this input as a dog picture and “1” as a cat picture. So you perceive a 1, the signal travels to the brain and enters the pre-processing machinery, that machinery retrieves the cat picture and shoves it into the visual input of your planner-part, claiming it’s what the binary organ perceives.
Similarly, the binary pain channel you’re describing would retrieve some hard-coded idea of how “I’m in pain” is meant to feel, convert it into a format the planner can parse, put it into some specialized input channel, and the planner would make decisions based on that. This would, of course, not be the rich and varied and context-dependent sense of pain we have — it would be, well, binary, always feeling the same.
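The binary-channel idea above can be sketched as a toy program (all names here are hypothetical illustrations, not anything from the original): a hard-wired interpreter maps the raw bit to a fixed, pre-built percept, and the planner only ever sees that percept, never the bit itself.

```python
# Hard-coded "interpretations" the pre-processing machinery retrieves.
# The percepts are fixed at wiring time; the raw signal carries one bit.
PERCEPT_TABLE = {
    0: "dog picture",   # what the planner is shown when the organ sends 0
    1: "cat picture",   # what the planner is shown when the organ sends 1
}

def preprocess(raw_bit: int) -> str:
    """Retrieve the fixed percept for a raw signal; the bit itself is discarded."""
    return PERCEPT_TABLE[raw_bit]

def planner(percept: str) -> str:
    """The planner makes decisions based only on the pre-built percept,
    never on the raw input channel."""
    return f"decide based on: {percept}"

# The organ fires a 1; the planner receives the cat percept and plans on it.
print(planner(preprocess(1)))
```

Note how the output is always one of exactly two fixed percepts, which is the point being made about the binary pain channel: such a signal would always feel the same, with none of the richness of real pain.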