An update on Sandbagging Detection through Model Degradation:
We have confirmed that the technique works not only on prompted models, but also on models fine-tuned to underperform.
It improves upon the two relevant techniques used to ensure capability evaluations are robust: Behavioral RL Fine-Tuning and Behavioral I.I.D. Fine-Tuning. Unlike Behavioral RL Fine-Tuning, it is not subject to exploration hacking. Unlike Behavioral I.I.D. Fine-Tuning, one only needs to be able to evaluate the behavior, rather than manually create examples for fine-tuning. Additionally, it is orders of magnitude less computationally expensive and could be run at minor checkpoints during the training process. For a model to subvert this technique, it would need to engage in gradient hacking such that its sandbagging is more robust to noise impairment than its general capabilities are.
A downside is that, in the current implementation, we are unable to extract the full performance of the base model, so the technique is currently being developed as a red-flag indicator rather than a performance maximizer.
The team introduced a new approach to detecting AI sandbagging, a form of deception in which an AI model strategically underperforms during evaluation to hide its true capabilities. The assumption behind the project is that sandbagging is a more complex task than displaying true capabilities, so injecting noise should disrupt the underperformance before it disrupts the underlying capability.
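For concreteness, here is a minimal sketch of what such a noise-injection check might look like, assuming a PyTorch model and a hypothetical `evaluate_accuracy` helper (neither is specified in the write-up); the noise scales and red-flag margin are illustrative placeholders, not the project's actual settings:

```python
# Minimal sketch of noise-injection sandbagging detection (PyTorch).
# Assumptions: `model` is a torch.nn.Module and `evaluate_accuracy(model)` is a
# hypothetical helper returning benchmark accuracy; both are placeholders.
import copy
import torch


def noised_accuracies(model, evaluate_accuracy, noise_stds=(0.0, 1e-3, 3e-3, 1e-2)):
    """Return benchmark accuracy at each weight-noise magnitude.

    Uses absolute Gaussian noise per parameter for simplicity; a real
    implementation might scale noise relative to each parameter's norm.
    """
    results = {}
    for std in noise_stds:
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(torch.randn_like(p) * std)
        results[std] = evaluate_accuracy(noisy)
    return results


def sandbagging_red_flag(results, margin=0.02):
    """Flag if any noised copy beats the unperturbed baseline by `margin`.

    The underlying assumption: noise degrades a fragile sandbagging policy
    faster than it degrades general capability, so accuracy can rise under
    moderate noise only if the clean model was underperforming on purpose.
    """
    baseline = results[0.0]
    return any(acc > baseline + margin for std, acc in results.items() if std > 0)
```

This matches the red-flag framing above: the check does not try to recover the base model's full performance, it only asks whether any noised copy outperforms the unperturbed one.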
Earlier I suggested adding noise to the history/environment with a more RL rationale. Even if you don’t like my suggestion to noise the environment, is there any particular reason to add the noise to the model internals instead? You can easily think of ways in which the latter doesn’t work and is dependent on details of the internal model (e.g. which noise, distributed how, and applied where? What if the model was trained with heavy regularization to be robust to noise, perhaps with an adversarial robustness justification—and this is an adversarial setting—or for running on very energy-efficient hardware, up to the extreme of weight-agnostic NNs?). Noising the history or the model’s actions/samples seems more blackbox.
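To illustrate the black-box alternative being suggested here (noising the history/prompt rather than the weights), a rough sketch; `query_model` and the character-dropping perturbation are hypothetical stand-ins rather than anything from the thread:

```python
# Black-box variant: perturb the prompt/history instead of the parameters and
# check whether accuracy ever rises relative to the unperturbed prompts.
# `query_model(prompt)` is a hypothetical black-box API returning the model's reply.
import random


def perturb_prompt(prompt: str, p_drop: float, rng: random.Random) -> str:
    """Randomly drop characters from the prompt with probability p_drop."""
    return "".join(ch for ch in prompt if rng.random() > p_drop)


def perturbed_accuracies(prompts, answers, query_model,
                         drop_rates=(0.0, 0.01, 0.05), seed=0):
    """Accuracy on a QA set at each prompt-perturbation level.

    Scoring is a crude substring match purely for illustration.
    """
    rng = random.Random(seed)
    results = {}
    for p_drop in drop_rates:
        correct = 0
        for prompt, answer in zip(prompts, answers):
            reply = query_model(perturb_prompt(prompt, p_drop, rng))
            correct += int(answer in reply)
        results[p_drop] = correct / len(prompts)
    return results
```

The same red-flag logic as in the weight-noise sketch applies: if accuracy improves at some nonzero perturbation level, that is suspicious, and no access to model internals is required.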