To the extent that we are using the existence of humans with inner monologue as evidence for the suitability of this approach to AGI, I think it also serves as evidence for why that approach isn’t really safe.
As an example: If I try, I can control my inner monologue to some extent. I could make my inner monologue about “officially approved actions” very loud, and run a secondary thread of thinking about “non-approved thoughts” in the background—“quieter” and more fragmented—killing it off every time it made it into the inner monologue.
Sure, there will be some traces. (A lot, even; I kinda suck at this :-).) But I think this already brings us outside the realm of “interpretable by default” and into “a race between your interpretability and my cognitive steganography”.
Note that I, and most humans, didn’t have any particular need to develop this exact type of cognitive steganography. (The “elephant in the brain” type of actions–thoughts inconsistency is not quite the same thing.) As a result, I think we would be able to get much, much better at it very quickly if there were an incentive to do so. (Not on an evolutionary timescale. More like “give me a weekend to play with this”.)
And while I have no incentive to develop these skills, the same cannot be said for a brain-like AI that we are (trying to) interpret as it thinks.