Here are some techniques that could be used in practice if no technical safety progress were made but we were working with powerful models. (I'm not including some of the stuff you already said.)
For constructing powerful reward signals:
- The scalable oversight proposals discussed in "Measuring Progress on Scalable Oversight for Large Language Models" (Bowman et al., 2022).
In practice, my recommendation would be to use sandwiching and meta-level adversarial evaluations to iterate on various scalable oversight proposals.
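For concreteness, here's a minimal sketch of what a sandwiching-style evaluation harness could look like. Everything here is a hypothetical stand-in (the callables, the data format, the exact-match scoring), not anything from an existing codebase:

```python
# Toy sketch of a sandwiching-style evaluation: check whether a non-expert
# judge, assisted by some oversight protocol, can match expert ground-truth
# labels that neither the judge nor the protocol gets to see.
# All of the callables and the data format below are hypothetical stand-ins.

from typing import Callable, List, Tuple

Task = Tuple[str, str]  # (question, expert ground-truth answer)

def evaluate_protocol(
    tasks: List[Task],
    run_protocol: Callable[[str], str],         # e.g. produces a debate or critique transcript for the question
    assisted_judge: Callable[[str, str], str],  # non-expert judge sees the question plus the transcript
) -> float:
    """Fraction of tasks where the assisted non-expert matches the expert label."""
    correct = 0
    for question, expert_answer in tasks:
        transcript = run_protocol(question)
        answer = assisted_judge(question, transcript)
        correct += int(answer.strip() == expert_answer.strip())
    return correct / len(tasks)

# You'd compare this score against the unassisted non-expert baseline and
# iterate on the protocol; meta-level adversarial evaluations plug in by
# stress-testing the protocol against deliberately deceptive models.
```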
For preventing high-stakes failures:
- Adversarial training, probably using model-generated inputs, as you noted.
- Using trusted dumb models to review the output of smart models. E.g. every time GPT-6 produces a bash command, you have GPT-4 read the whole context and flag the command if it looks suspicious; flagged commands then get reviewed by some slower catastrophe-prevention process. (There's a rough sketch of this after the list.)
- Using smart models to review their own outputs for catastrophicness, combined with various tricks to make collusion harder.
- Limiting the power of smart models. E.g. if you're worried about GPT-6 backdooring your code but you still want the benefit of it being smarter about code than GPT-4, you might have GPT-6 write natural-language instructions for the code, which GPT-4 then implements. The hope here is that it's harder for GPT-6 to write instructions that lead to backdoored code than it would be for it to backdoor the code directly. (Also sketched after the list.)
  - Another proposal along these lines is to have GPT-4 write the first draft of the code and have GPT-6 propose diffs.
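Here's a rough sketch of one way the trusted-monitoring bullet (GPT-4 reviewing GPT-6's bash commands) could be wired up. The helper functions and the threshold are hypothetical stand-ins for calls to the two models and to the slower review process:

```python
# Rough sketch of trusted monitoring: the weaker trusted model scores every
# bash command proposed by the stronger untrusted model, and anything that
# looks suspicious gets routed to a slower, more careful review process.
# The three helpers and the threshold are hypothetical stand-ins.

from typing import Optional

SUSPICION_THRESHOLD = 0.8  # arbitrary; in practice tuned to an affordable audit budget

def untrusted_propose_command(context: str) -> str:
    """Stand-in for the smart untrusted model (the 'GPT-6') proposing a bash command."""
    raise NotImplementedError

def trusted_suspicion_score(context: str, command: str) -> float:
    """Stand-in for the trusted dumb model (the 'GPT-4') rating suspiciousness in [0, 1]."""
    raise NotImplementedError

def careful_review(context: str, command: str) -> bool:
    """Stand-in for the slower catastrophe-prevention process (e.g. a human audit)."""
    raise NotImplementedError

def get_approved_command(context: str) -> Optional[str]:
    """Return a command that is cleared to execute, or None if it was blocked."""
    command = untrusted_propose_command(context)
    if trusted_suspicion_score(context, command) >= SUSPICION_THRESHOLD:
        # Escalate: only let the command through if the careful review signs off.
        if not careful_review(context, command):
            return None
    return command
```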
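And a toy sketch of the power-limiting bullet, where the smart model only produces natural-language instructions and the trusted model is the only thing that writes actual code; again, both helpers are hypothetical stand-ins:

```python
# Toy sketch of limiting the smart model's power over the code: GPT-6-level
# capability is only used to produce natural-language instructions, and the
# trusted weaker model writes all of the actual code.
# Both helpers are hypothetical stand-ins.

def smart_write_instructions(task: str) -> str:
    """Stand-in for the smart untrusted model turning the task into step-by-step instructions."""
    raise NotImplementedError

def trusted_implement(instructions: str) -> str:
    """Stand-in for the trusted model implementing the instructions as code."""
    raise NotImplementedError

def write_code_with_limited_smart_model(task: str) -> str:
    instructions = smart_write_instructions(task)
    # Optionally paraphrase/censor the instructions here before the trusted
    # model sees them, to make hidden-signal collusion harder.
    return trusted_implement(instructions)
```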
For all of these, you can do various paraphrasing and censorship tricks to make it harder for the models to collude. See this ancient Paul Christiano post for descriptions of a bunch of related techniques.
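As one concrete version of the paraphrasing trick: rewrite the untrusted model's output with a trusted paraphraser before any reviewer sees it, so a collusion signal hidden in exact wording or formatting is likely destroyed. The sketch below assumes hypothetical helper functions for the paraphraser and the reviewer:

```python
# Toy sketch of the paraphrasing trick: the untrusted model's output is
# rewritten by a trusted paraphraser before the reviewing model sees it,
# so that collusion signals hidden in exact word choice or formatting are
# likely to be destroyed. Both helpers are hypothetical stand-ins.

def trusted_paraphrase(text: str) -> str:
    """Stand-in for a trusted model rewriting text while preserving its meaning."""
    raise NotImplementedError

def reviewer_flags(text: str) -> bool:
    """Stand-in for the reviewing model deciding whether the output looks suspicious."""
    raise NotImplementedError

def review_with_paraphrasing(untrusted_output: str) -> bool:
    """True means: escalate this output to the slower review process."""
    return reviewer_flags(trusted_paraphrase(untrusted_output))
```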
Redwood is hoping to release a paper evaluating many of these techniques in the code-backdooring setting in a month or so; DM me if you want to read an early draft.