Question: is it actually possible to cryptographically obfuscate goals/plans/precommitments (or other elements of “deception” as mentioned in your second bullet point) against something inspecting your algorithm and working memory?
In particular, it seems to me that anything genuinely able to inspect you could, at a minimum, discover that encryption is being used, find where the encryption key is read from, and then read the encrypted content. Or simply fail the model as soon as any encryption is detected at all.
I’m not saying it will be easy to build such transparency tools (in fact, it would be a herculean challenge, since selecting against interpreted thoughts selects for uninterpretable thoughts, and a superintelligence might use principles not yet known to humanity); I just mean that, from the perspective of the AI itself, perfect obfuscation against an ideal, logically possible interpretability tool might be impossible.
Edit:
Never mind, I answered my own question. There is no such logically possible interpretability tool. For any such tool we logically-counterfactually assume to exist, the AI could run a copy of that tool on itself and use encryption if and only if the tool asserts that the AI does not use encryption. Whichever verdict the tool returns about that AI is then wrong, so no algorithm exists which can verify the use of encryption in the general case.
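A minimal sketch of that diagonal step in Python, with hypothetical stand-in names rather than any real tooling: whatever concrete detector you propose, an adversary can consult that detector about itself and invert the verdict.

```python
# Sketch of the diagonal argument, assuming a hypothetical `candidate_detector`
# standing in for a claimed perfect "does this program use encryption?" oracle.
# Any concrete detector can be defeated the same way.

def candidate_detector(program) -> bool:
    """Stand-in oracle: returns True iff it judges `program` to use encryption."""
    # A toy heuristic; the argument goes through regardless of what is written here.
    return getattr(program, "claims_to_encrypt", False)

def make_adversary(detector):
    def adversary():
        # Consult the detector about this very program, then do the opposite.
        if detector(adversary):
            return "runs in the clear"   # detector said "encrypts": verdict is wrong
        return "encrypts its plans"      # detector said "clean": verdict is wrong
    return adversary

adversary = make_adversary(candidate_detector)
print(candidate_detector(adversary))  # the detector's verdict about the adversary
print(adversary())                    # the adversary's actual behaviour contradicts it
```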