even an incredibly sophisticated deceptive model which is impossible to detect via the outputs may be easy to detect via interpretability tools (analogy—if I knew that sophisticated aliens were reading my mind, I have no clue how to think deceptive thoughts in a way that evades their tools!)
It seems to me that your analogy is the wrong way around. I.e., the right analogy would be: “if I knew that a bunch of 5-year-olds were reading my mind, I have... actually, a pretty good idea how to think deceptive thoughts in a way that evades their tools”.
(For what it’s worth, I am not very excited about interpretability as an auditing tool. This, i.e. that powerful AIs might evade it, is one half of the reason. The other half is that I am sceptical we will take audit warnings seriously enough; i.e., we might ignore any scheming that falls short of a “clear-cut example of a workable plan to kill many people”. E.g., ignore things like this. Or we might even decide to “fix” these issues by putting the interpretability in the loss function, and just deploying once the loss goes down.)
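To make that last worry concrete, here is a minimal, purely hypothetical sketch (PyTorch-style, all names made up) of what “putting the interpretability in the loss function” could look like: a frozen probe that flags some concept in the activations gets added as a penalty term, and training then drives the probe’s score down without any guarantee that the underlying behaviour changed.

```python
import torch

# Hypothetical sketch, not anyone's actual setup: a frozen linear probe flags
# "deception-like" activations, and its score is added to the training loss.
class ProbedModel(torch.nn.Module):
    def __init__(self, hidden_dim=128):
        super().__init__()
        self.backbone = torch.nn.Sequential(
            torch.nn.Linear(32, hidden_dim), torch.nn.ReLU()
        )
        self.head = torch.nn.Linear(hidden_dim, 10)
        # "Interpretability" probe, trained separately and kept frozen here.
        self.probe = torch.nn.Linear(hidden_dim, 1)
        for p in self.probe.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        h = self.backbone(x)
        return self.head(h), torch.sigmoid(self.probe(h))

model = ProbedModel()
opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad])
task_loss_fn = torch.nn.CrossEntropyLoss()

x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))
for _ in range(100):
    logits, probe_score = model(x)
    # "Fixing" the audit: penalise whatever the probe flags, then deploy
    # once this combined loss goes down.
    loss = task_loss_fn(logits, y) + 1.0 * probe_score.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
# The probe score drops, but possibly only because the model learned to make
# its activations look innocuous to this particular probe.
```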
How would you evade their tools?
If I could assume things like “they are much better at reading my inner monologue than my non-verbal thoughts”, then I could create code words for prohibited things.
I could think in words they don’t know.
I could think in complicated concepts they haven’t understood yet. Or in references to events, or to memories of mine, that they don’t know about.
I could leave a part of my plans implicit, and only figure out the details later.
I could harm them through some action they won’t recognise as harmful, so they might not be alarmed even if they catch me thinking about it. (Leaving a gas stove on, unlit.)
And then there are the more boring options: if you know more details about how the mind-reading works, you can try to defeat it directly. (Make the evil plans at night when they are asleep, or when the shift is changing, etc.)
(Also, I assume you know “Circumventing interpretability: How to defeat mind-readers”, but mentioning it just in case.)