So if I understand correctly, your basic claim underlying all of this is that a system can be said not to be conscious if its set of beliefs remains equally valid when you switch the labels on some of the things it has beliefs about. I have a few concerns about this point, which you may have already considered, but which I would like to see addressed explicitly. I will post them as replies to this post.
If I am mischaracterizing your position, please let me know, and then my replies to this post can probably be ignored.
Doesn’t this fail independence of irrelevant alternatives? That is to say, couldn’t I take a conscious system and augment it with two atoms, then add one fact about each atom such that switching the labels on the two atoms maintains the truth of those facts? It seems to me that in that case, the system would be provably unconscious, which does not accord with my intuition.
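To make my worry concrete, here is a toy sketch in Python of the label-swap test as I understand it, applied to the two-atom augmentation. All the predicates and symbol names here are mine, invented for illustration; I’m only guessing at how you’d formalize “remains equally valid”.

```python
from itertools import permutations

def relabel(facts, mapping):
    """Apply a symbol relabeling to a set of facts.
    Each fact is a tuple: (predicate, arg1, arg2, ...)."""
    return {(pred,) + tuple(mapping.get(a, a) for a in args)
            for pred, *args in facts}

def has_nontrivial_symmetry(facts, symbols):
    """True if some non-identity permutation of the symbols maps the
    fact set onto itself, i.e. the beliefs stay equally valid after
    a label switch."""
    for perm in permutations(symbols):
        if list(perm) == list(symbols):
            continue  # skip the identity permutation
        mapping = dict(zip(symbols, perm))
        if relabel(facts, mapping) == facts:
            return True
    return False

# A belief set with no label symmetry: every swap breaks some fact.
core = {("warmer", "red", "blue"), ("likes", "self", "red")}
print(has_nontrivial_symmetry(core, ["red", "blue", "self"]))  # False

# Augment with two atoms and one parallel fact about each:
augmented = core | {("atom", "a1"), ("atom", "a2")}
print(has_nontrivial_symmetry(
    augmented, ["red", "blue", "self", "a1", "a2"]))  # True
```

On this reading, adding the two interchangeable atoms flips the verdict for the whole system, which is exactly the independence-of-irrelevant-alternatives worry.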
Yes; I mentioned that in the full version. The brain is full of information that we’re not conscious of. This is necessarily so when you have regions of the graph of K with low connectivity. A more complete analysis would look for uniquely-grounded subsets of K. For example, it’s plausible that infants thrashing their arms around blindly have knowledge in their brains about where their arms are and how to move them, but are not conscious of that knowledge, though they are conscious of simpler sensations.
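Concretely, such an analysis might look something like the following rough sketch, reusing relabel, has_nontrivial_symmetry, and the augmented example from the sketch above. The decomposition by sub-vocabulary is only one way to cash out “uniquely-grounded subsets of K”, not a definition I’m committed to.

```python
from itertools import combinations

def restriction(facts, subset):
    """Facts all of whose arguments lie within the given symbol subset."""
    return {f for f in facts if set(f[1:]) <= set(subset)}

def uniquely_grounded_subsets(facts, symbols):
    """Yield symbol subsets whose restricted fact set admits no
    nontrivial relabeling: candidate uniquely-grounded parts of K."""
    for r in range(2, len(symbols) + 1):
        for subset in combinations(symbols, r):
            sub = restriction(facts, subset)
            if sub and not has_nontrivial_symmetry(sub, list(subset)):
                yield subset, sub

# Any subset containing both a1 and a2 admits the swap and is excluded;
# subsets of the red/blue/self core come back as uniquely grounded.
for subset, sub in uniquely_grounded_subsets(
        augmented, ["red", "blue", "self", "a1", "a2"]):
    print(subset)
```

On this view the thrashing-infant case would come out as a low-connectivity region of K: the motor facts would be isolated enough to admit relabelings, while the simpler sensations would sit in a uniquely-grounded subset.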
What does this all mean physically? You talk about a symbolic reasoning system consisting of logic assertions and such, but any symbolic reasoning system ultimately has to be made out of atoms. How can I look at one lump of atoms and tell that it’s a symbolic reasoning system, and another lump and tell that it’s just random junk?
You can’t, because you can interpret any system as a symbolic reasoning system. You don’t need to ask whether a system is a symbolic reasoning system; you need to ask whether it’s conscious.
How can one grounding be falsifiable and another not, and the two groundings still be entirely indistinguishable? If one is falsifiable and the other isn’t, shouldn’t there be some detectable difference between them? How would they flicker back and forth, as you say, like a Necker cube? Wouldn’t there be some truth of the matter?
I don’t think they can. Rather than argue the point, I wanted to accommodate people who believe that qualia are part of groundings, and that you would have a different grounding if you swapped the experience of blue with the experience of red.
That’s how I used to phrase it, but now I would say instead that you switch what the things are mapped to. I think of the labels themselves as qualia, so that switching just the labels would be like switching the experience of “blue” with the experience of “red”.