You can use the first X% of scenarios at deployment time and audit them with humans.
Only if you don’t have privacy guarantees?
Also, for the honeypots, you can potentially generate vast numbers of honeypots, use a trusted model to review outputs, and then only have humans look at the cases where the trusted model thinks the output is suspicious.
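A minimal sketch of that triage pipeline, assuming illustrative stand-in functions (`untrusted_model`, `trusted_monitor`) rather than any real API: score every honeypot output with the trusted model, and only escalate the suspicious ones to humans.

```python
def triage(honeypots, untrusted_model, trusted_monitor, threshold=0.8):
    """Return the (task, output, score) triples flagged for human audit.

    `untrusted_model(task)` produces an output; `trusted_monitor(task, output)`
    returns a suspicion score in [0, 1]. Both are hypothetical placeholders.
    The threshold is arbitrary and would be tuned to the human-audit budget.
    """
    flagged = []
    for task in honeypots:
        output = untrusted_model(task)
        suspicion = trusted_monitor(task, output)
        if suspicion >= threshold:
            flagged.append((task, output, suspicion))
    return flagged
```

The point is that human attention scales with the trusted model's false-positive rate, not with the total number of honeypots.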
Your control protocols should be able to prevent the best adversary you can simulate from breaking them.
Surely this will be unnecessarily conservative a bunch of the time?
In the limit, the red team could provide a human that reads every task, sits and thinks for hours, and then responds with the best output. This isn’t done in practice since it is expensive and slow.
I agree that if there are privacy guarantees, you can't get data from the real world, so you'd need honeypots. I guess I'm much more scared of the internal deployments, where I don't think privacy guarantees are as likely. I'm trying to respond to someone who is like, "Control seems hard because you have to generate real-world-looking input, and that seems hard" and I'm like "You can just use the real-world inputs, filter them with a monitor to find the most scary ones, and perform auditing"
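That "filter real inputs with a monitor, audit the scariest" idea could be sketched like this. Everything here is a hypothetical placeholder (`untrusted_model`, `monitor`, the budget): the shape is just top-k selection by suspicion score under a fixed audit budget.

```python
import heapq

def select_for_audit(real_inputs, untrusted_model, monitor, budget=10):
    """Pick the `budget` most suspicious real-deployment trajectories to audit.

    `monitor(x, output)` returns a suspicion score; real inputs need no
    generation step, which is the whole advantage over honeypots here.
    """
    scored = ((monitor(x, untrusted_model(x)), x) for x in real_inputs)
    return heapq.nlargest(budget, scored)
```

No synthetic-input generation is needed; the hard part moves to calibrating the monitor.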
Yeah, I guess I'm saying that if you don't have a great way to elicit the capabilities from the model, you can substitute that by assuming that you are going against the best possible adversary you can make. This is much more conservative, but I don't think it makes control impossible to implement. In practice, I think it looks like finding the maximum capabilities of the model using your elicitation strategy, then boosting its capabilities even more with a scaffold and some clever red team tricks.
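One way to picture that conservative substitution, with entirely made-up numbers: take the best score across all elicitation attempts, then inflate it by an assumed scaffold/red-team multiplier to stand in for capabilities your elicitation missed.

```python
def conservative_capability(elicited_scores, scaffold_boost=1.2):
    """Upper-bound the adversary's capability for red-teaming purposes.

    `elicited_scores` are capability estimates from different elicitation
    attempts; `scaffold_boost` is an arbitrary placeholder margin for
    scaffolding and red-team tricks, not an empirically grounded constant.
    """
    return max(elicited_scores) * scaffold_boost
```

The margin makes the red team strictly stronger than your best elicitation, which is the conservative direction to err in.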
Or is it overly conservative?
You might not need privacy guarantees for internal deployments ofc