Interesting theory.
I tend to agree with Tim Tyler that the “common” interpretation of consciousness is simpler and that the signaling machinery is not necessary. I realize that you are trying to increase the scope of the theory, but I am not yet convinced that the cure is better than the disease.
While I can see why an ape trying to break into “polite society” might want to gain the facility you describe, the apes created that “polite society” in the first place, so I do not see a plausible way out of the catch-22 (perhaps it’s a lack of imagination on my part).
You raise the question of U not being able to be a complete hypocrite and therefore “inventing” C, who is good at lying to itself. But wouldn’t others notice that C is lying to itself and that the agent remains largely a hypocrite? And if the achievement of C is greater cooperativeness, why doesn’t U just skip the self-deception and become more cooperative directly? (I actually think this last point might be answerable, the key observation being that C operates on a logical, verbal level. This allows it to be predictably consistent in certain specific situations, such as “if friend, do this”, which is very important in solving the kinds of game-theoretic scenarios you describe. Handing cooperation over to C, rather than “making U more cooperative”, creates consistency, and that consistency is essential. I think you might have hinted at this.)
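To make that consistency point concrete, here is a toy iterated prisoner’s dilemma sketch; the payoffs and the two strategies are made up purely for illustration, not taken from your post:

```python
# Toy iterated prisoner's dilemma: a C-style agent that predictably
# follows the verbal rule "if friend, cooperate" versus a U-style
# opportunist that grabs the short-term gain. All numbers are
# illustrative assumptions.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(rounds, my_strategy):
    """The partner cooperates only while it still trusts me."""
    total, trusted = 0, True
    for _ in range(rounds):
        their_move = "C" if trusted else "D"
        my_move = my_strategy(trusted)
        total += PAYOFF[(my_move, their_move)]
        if my_move == "D":      # one visible defection destroys trust
            trusted = False
    return total

consistent = lambda trusted: "C"    # C's rule, applied no matter what
opportunist = lambda trusted: "D"   # U's case-by-case "best" move

print(play(20, consistent))   # 60: mutual cooperation every round
print(play(20, opportunist))  # 24: one exploitation, then mutual defection
```

The predictable agent forgoes the one-shot gain but keeps the partner’s cooperation, which is the kind of consistency I had in mind.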
ETA: the theory might be more palatable if the issues of consciousness and the “public relations function” were separated along byrnema’s lines (but perhaps more clearly).
Regarding your latter two points: the idea of signalling games is that as long as C has some influence on your behavior, others can infer from your apparent trustworthiness, altruism, etc. that you are at least somewhat trustworthy, altruistic, etc. If you did away with C and simply made your U more trustworthy, you would seem less trustworthy than someone with a C, because other agents in the signalling game would assume that you do have a C and that your U is unusually untrustworthy. So there’s an incentive to be partially hypocritical.
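A toy numeric version of that inference (the weights and scores are my own made-up assumptions, just to illustrate the blending argument):

```python
# Observed "niceness" is modeled as a blend of U's real trustworthiness
# and C's public-relations influence. Observers assume everyone has a C
# and invert the blend to estimate the underlying U. All constants are
# illustrative assumptions.

C_WEIGHT = 0.4      # assumed share of behaviour that C controls
C_NICENESS = 0.9    # C presents as highly cooperative

def observed(u_niceness, has_c=True):
    if has_c:
        return (1 - C_WEIGHT) * u_niceness + C_WEIGHT * C_NICENESS
    return u_niceness

def inferred_u(observed_niceness):
    # Observers assume a C is present, so they back U out of the blend.
    return (observed_niceness - C_WEIGHT * C_NICENESS) / (1 - C_WEIGHT)

# Agent A: ordinary U (0.5) plus a C.
# Agent B: no C, but a genuinely improved U (0.6).
print(round(inferred_u(observed(0.5, has_c=True)), 2))   # 0.5  -> judged a normal U
print(round(inferred_u(observed(0.6, has_c=False)), 2))  # 0.4  -> judged unusually untrustworthy
```

Even though B’s underlying U is actually more trustworthy than A’s, the observers’ assumption that a C is present makes B look worse, which is exactly the incentive to keep the partial hypocrisy.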