I’m glad to see this. Some initial thoughts about the control safety case:
I don’t see how the “monitoring game” is different from the red-team game. Is it that the blue team is limited to considering only protocols that use just monitoring? Or do you not expect to play a full red-team simulation? (The latter would be worrisome.)
Option 2 (honeypotting) feels doomed (in isolation). This primarily stems from my belief that making situations that are indistinguishable from deployment is probably hard: the model has control over state in very broad and hard-to-predict ways, and that state can implicitly encode that it is in deployment (for instance, the model can see code that it itself has written and that would have taken a long time to write, and realize it has likely been deployed for some time). You can still run honeypotting tests using this deployment-shifted distribution, but then you have to hope that you are running honeypots frequently enough to catch the model while it is confident it is in deployment.
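To put a rough number on ‘frequently enough’ (the quantities f and k here are my own placeholders, not from the post): if honeypots make up a fraction f of the tasks the model actually treats as real deployment, and the model attempts at most k attacks, then even in the optimistic case where it can’t distinguish those honeypots at attack time,

$$P(\text{at least one attack caught}) = 1 - (1 - f)^k \approx kf \quad \text{for small } f.$$

Any residual ability to distinguish honeypots only pushes this down, so a patient model that attacks once or twice against infrequent honeypots gets caught with probability on the order of f.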
The elicitation problem, and the broader problem of justifying that the red-blue teaming has been sufficient, indeed feel quite hard. This is one reason why you might expect control cases to only work for roughly human-level AI: if your ‘elicitation protocol’ just ends up being ‘use attack strategies made primarily by humans’, it can’t tell you much about a model capable of attacks that humans wouldn’t think of.
There are related problems in justifying how many affordances to give the red team. For instance, decisions about whether the red team should be able to give the model (or train into it) ‘coordination signals’ are likely important, but might not be easy to make. Past research has been maximally conservative here, but that might not be realistic.
The limitation that you are measuring P(catastrophe | misaligned) rather than P(catastrophe) itself is fairly large, and I would be interested in hearing how important others think this is. More broadly, I don’t feel great about our chances of bounding P(catastrophe), for reasons like this.
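To make the gap concrete (this decomposition is mine, not from the post):

$$P(\text{catastrophe}) = P(\text{catastrophe} \mid \text{misaligned})\,P(\text{misaligned}) + P(\text{catastrophe} \mid \text{aligned})\,P(\text{aligned})$$

The control case only bounds the first conditional factor, so turning it into a bound on P(catastrophe) still requires some estimate of P(misaligned), plus a separate story for the aligned-but-still-catastrophic term.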