[Question] Is anyone developing optimisation-robust interpretability methods?

By "optimisation-robust" I mean methods that withstand point 27 from AGI Ruin:
When you explicitly optimize against a detector of unaligned thoughts, you’re partially optimizing for more aligned thoughts, and partially optimizing for unaligned thoughts that are harder to detect. Optimizing against an interpreted thought optimizes against interpretability.
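To make the failure mode concrete, here is a minimal toy sketch (my own illustration, not from AGI Ruin): a policy is trained with a penalty from a frozen detector added to its task loss. The gradient flowing through the detector rewards any change that lowers the detector's score, whether or not that change corresponds to genuine alignment. All names here (`detector`, `policy`, `task_loss`) are hypothetical stand-ins.

```python
# Toy sketch, assuming a frozen linear "unaligned-thought" detector and a
# policy whose latent "thoughts" we can score. Illustrative only.
import torch

torch.manual_seed(0)

# Hypothetical frozen detector: scores how "unaligned" a latent thought looks.
detector = torch.nn.Linear(8, 1)
for p in detector.parameters():
    p.requires_grad_(False)

# Policy producing latent "thoughts" from observations.
policy = torch.nn.Linear(8, 8)
opt = torch.optim.SGD(policy.parameters(), lr=0.1)

def task_loss(thought):
    # Stand-in for whatever objective the agent is actually trained on.
    return (thought - 1.0).pow(2).mean()

for step in range(100):
    obs = torch.randn(32, 8)
    thought = policy(obs)
    # Combined objective: do the task AND look aligned to the detector.
    loss = task_loss(thought) + detector(thought).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The gradient through `detector(thought)` does not distinguish "be aligned"
# from "be undetectable": any direction in thought-space that lowers the
# detector's score is rewarded, including directions orthogonal to alignment.
# Optimizing against the interpreted thought optimizes against the interpreter.
```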
Are you aware of any person or group that is working expressly on countering this failure mode?