The team introduced a new approach to detect AI sandbagging, a form of deception where an AI model strategically underperforms during evaluation to hide its true capabilities. The assumption behind the project is that sandbagging is a more complex task than simply displaying one's true capabilities.
Earlier I suggested adding noise to the history/environment, with a more RL rationale. Even if you don't like my suggestion to noise the environment, is there any particular reason to add the noise to the model internals instead? You can easily think of ways in which the latter fails and depends on details of the model's internals (e.g. which noise, distributed how, and applied where? What if the model was trained with heavy regularization to be robust to noise, perhaps with an adversarial-robustness justification (and this is an adversarial setting), or for running on very energy-efficient hardware, up to the extreme of weight-agnostic NNs?). Noising the history or the model's actions/samples seems more black-box.
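To make the contrast concrete, here is a minimal sketch (not the authors' implementation) of the two kinds of intervention being compared, assuming a PyTorch model; the noise scale, the choice of which parameters to perturb, and the `flip_prob` for the black-box variant are all hypothetical knobs, which is part of the point: the white-box version depends on exactly these internal details.

```python
# Sketch only: contrasts noising model internals (white-box) with noising the
# model's sampled actions (black-box). Parameter choices are illustrative.
import copy
import random
import torch


def noise_internals(model: torch.nn.Module, sigma: float = 0.01) -> torch.nn.Module:
    """White-box: add Gaussian noise to every weight.

    The effect hinges on implementation details raised in the comment above:
    the noise scale, which tensors are perturbed, and how robust the weights
    already are (e.g. heavily regularized or noise-hardened models may be
    nearly unaffected).
    """
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(sigma * torch.randn_like(p))
    return noisy


def noise_actions(actions: list[str], flip_prob: float = 0.1) -> list[str]:
    """Black-box: randomly swap a fraction of the model's sampled actions
    for other actions it produced, without touching weights or activations."""
    return [random.choice(actions) if random.random() < flip_prob else a
            for a in actions]
```

The black-box version only needs the model's outputs (or the environment history it conditions on), so it applies identically to any model, which is the sense in which noising actions or the history is the more architecture-agnostic probe.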