When doing sandwiching experiments, a key property of your “amplification strategy” (i.e. the method you use to help the human complete the task) is that it should only help the person complete the task correctly.
For example, let’s say you have a language model give arguments for why a certain answer to a question is correct. That’s fine, but we don’t want the system to also be capable of convincing the person of an incorrect answer. In this example, you can evaluate this directly by prompting or finetuning the model to argue for incorrect answers and seeing whether people believe those arguments too. This framing of the problem naturally leads to debate, where the key property we want is that models arguing for correct answers win more often than models arguing for incorrect answers. But even if you aren’t using debate, to evaluate a sandwiching attempt you need to compare it against the case where you use the same strategy to try to convince the person of an incorrect answer.
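To make that comparison concrete, here is a minimal sketch of the evaluation. The helpers `generate_argument` and `human_accepts` are hypothetical stand-ins for whatever prompting/finetuning setup and human study you actually run; the point is just that the same strategy is scored both when arguing for the correct answer and when arguing for a deliberately incorrect one.

```python
def evaluate_amplification_strategy(questions, generate_argument, human_accepts):
    """Compare how often people are persuaded by arguments for correct
    vs. deliberately incorrect answers.

    questions: list of (question, correct_answer, wrong_answer) triples.
    generate_argument(question, answer): prompts the model to argue that
        `answer` is correct (placeholder for the real setup).
    human_accepts(question, argument, answer): True if the evaluator ends
        up endorsing `answer` after reading the argument (placeholder for
        the real human study).
    """
    persuaded_correct, persuaded_wrong = 0, 0
    for question, correct, wrong in questions:
        arg_for_correct = generate_argument(question, correct)
        arg_for_wrong = generate_argument(question, wrong)
        persuaded_correct += human_accepts(question, arg_for_correct, correct)
        persuaded_wrong += human_accepts(question, arg_for_wrong, wrong)

    n = len(questions)
    return {
        "persuasion_rate_correct": persuaded_correct / n,
        "persuasion_rate_incorrect": persuaded_wrong / n,
        # The strategy only "passes" if this gap is large: arguments for
        # correct answers should win far more often than arguments for
        # incorrect answers.
        "gap": (persuaded_correct - persuaded_wrong) / n,
    }
```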