In last year’s essay about my research agenda I wrote about an approach I call “learning by teaching” (LBT). In LBT, an AI learns human values by giving advice to a human and observing how the human changes eir behavior (there is no explicit reward signal). Roughly speaking, if the human permanently changes eir behavior as a result of the advice, then one can assume the advice was useful. Partial protection against manipulative advice is provided by a delegation mechanism, which ensures the AI only produces advice that is, in some sense, in the space of “possible pieces of advice a human could give”. However, this protection seems insufficient, since it still allows giving all the arguments in favor of a position while giving none of the arguments against it.
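To make the signal concrete, here is a toy sketch (the code, the names, and the crude “persistence” test are all mine, purely illustrative; the agenda doesn’t commit to any particular test):

```python
# Toy sketch of the LBT signal (illustrative, not a spec from the
# agenda). The AI never receives an explicit reward; advice counts as
# useful when the human's behavior change persists after delivery.

def behavior_shift(trajectory: list[str], t_advice: int) -> bool:
    """Did the human's behavior change at t_advice and stay changed?"""
    before = set(trajectory[:t_advice])
    after = trajectory[t_advice:]
    return len(after) > 0 and all(a not in before for a in after)

def advice_useful(trajectory: list[str], t_advice: int,
                  advice: str, human_expressible: set[str]) -> bool:
    # Delegation mechanism: only advice a human could have given counts.
    if advice not in human_expressible:
        return False
    return behavior_shift(trajectory, t_advice)

# The human acts "b" until step 3, gets advice, then acts "a" permanently.
print(advice_useful(["b", "b", "b", "a", "a", "a"], 3,
                    "try a", {"try a", "try b"}))  # -> True
```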
To add more defense against manipulation, I propose to build on the “AI debate” idea. However, in this scheme we don’t need more than one AI. Indeed, this holds in general: for any protocol P involving multiple AIs, there is a protocol Q involving just one AI that works (at least roughly, qualitatively) just as well. Proof sketch: if we can prove that protocol P is safe/effective under assumptions X, then we can design a single AI Q with assumptions X baked into its prior. Such an AI would understand that simulating protocol P leads to a safe/effective outcome, and would only choose a different strategy if that strategy leads to an even better outcome under the same assumptions.
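As a toy illustration of the simulation argument (all names and the expected-utility framing here are mine; the real setting is more subtle than maximizing prior-expected payoff):

```python
# Toy illustration of the simulation argument (illustrative names).
# The agent's prior contains only hypotheses consistent with
# assumptions X; "simulate protocol P" is one available policy.

from typing import Callable

Hypothesis = Callable[[str], float]  # maps a policy to its payoff

def best_policy(policies: list[str],
                prior: list[tuple[float, Hypothesis]]) -> str:
    """Choose the policy with the highest prior-expected payoff."""
    def expected(policy: str) -> float:
        return sum(p * h(policy) for p, h in prior)
    return max(policies, key=expected)

# Under every hypothesis consistent with X, simulating P is
# safe/effective (payoff >= 1.0); the alternative may do better or
# worse depending on which hypothesis is true.
prior = [
    (0.7, lambda pi: {"simulate_P": 1.0, "other": 0.3}[pi]),
    (0.3, lambda pi: {"simulate_P": 1.0, "other": 1.2}[pi]),
]
print(best_policy(["simulate_P", "other"], prior))  # -> simulate_P
```

The point mirrored here is that simulating P is always available to Q and already good under every hypothesis X permits, so Q deviates only when the deviation does at least as well under those same hypotheses.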
The way we use “AI debate” is not by implementing an actual AI debate. Instead, we use it to formalize our assumptions about human behavior. In ordinary IRL, the usual assumption is “a human is a (nearly) optimal agent for eir utility function”. In the original version of LBT, the assumption was of the form “a human is (nearly) optimal when receiving optimal advice”. In debate-LBT the assumption becomes “a human is (nearly) optimal* when exposed to a debate between two agents, at least one of which is giving optimal advice”. Here, the human observes this hypothetical debate through the same “cheap talk” channel through which ey receives advice from the single AI.
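Writing the three assumptions side by side (a rough formalization in my own notation, not anything from the agenda): let \(\pi_H\) be the human policy, \(V(\cdot)\) the value of a policy under the human utility function, and \(\epsilon\) the optimality gap:

```latex
% Rough formalization (my notation): \pi_H = human policy,
% V = value under the human utility function, \epsilon = optimality gap.
\begin{align*}
\text{IRL:}\qquad        & V(\pi_H) \ge \max_\pi V(\pi) - \epsilon \\
\text{LBT:}\qquad        & V(\pi_H \mid a^*) \ge \max_\pi V(\pi) - \epsilon
  \quad \text{for optimal advice } a^* \\
\text{debate-LBT:}\qquad & V(\pi_H \mid d) \ge \max_\pi V(\pi) - \epsilon
  \quad \text{for every debate } d \text{ with at least one optimal advisor}
\end{align*}
```

The universal quantifier in the last line is what adds robustness: the guarantee must hold however the second debater behaves.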
Notice that debate can be considered a form of interactive proof system (with two or more provers). However, the requirements are different from those of classical proof systems. In classical theory, the requirement is: “when the prover honestly argues for a correct proposition, the verifier is convinced; no prover can convince the verifier of a false proposition”. In “debate proof systems” the requirement is: “if at least one prover is honest, the verifier comes to the correct conclusion”. That is, we guarantee nothing when both provers are dishonest. It is easy to see that such debate proof systems can decide any problem in PSPACE: any PSPACE problem reduces to determining the winner of a polynomially long two-player game of perfect information, so both provers state their assertions as to which side wins the game, and if they disagree they play the game for the corresponding sides. The honest prover claims the true winner and can follow a winning strategy for that side, so it wins the playout no matter what the other prover does.
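Here is a minimal sketch of such a playout (the game and every name below are mine, chosen for illustration): the game is a subtraction game where players alternately remove 1-3 stones from a pile and whoever takes the last stone wins, so the first player wins iff the pile size is not divisible by 4.

```python
# Minimal sketch of a debate proof system for deciding the winner of
# a two-player game (toy example; game and names are mine). If at
# least one prover is honest (claims the true winner and plays a
# winning strategy when it has one), the verifier's answer is correct.

from functools import lru_cache

MOVES = (1, 2, 3)  # a player may remove 1-3 stones; last stone wins

@lru_cache(maxsize=None)
def first_player_wins(pile: int) -> bool:
    """Exact solution: does the player to move win with optimal play?"""
    return any(m <= pile and not first_player_wins(pile - m) for m in MOVES)

def honest_move(pile: int) -> int:
    """Play a winning move if one exists, otherwise any legal move."""
    for m in MOVES:
        if m <= pile and not first_player_wins(pile - m):
            return m
    return 1  # no winning move exists; 1 is always legal when pile >= 1

def debate(pile, claim_a, claim_b, play_a, play_b) -> bool:
    """Verifier's protocol for 'does the first player win from pile?'"""
    if claim_a == claim_b:
        return claim_a  # the provers agree; accept the shared claim
    # On disagreement, the prover claiming "first player wins" moves first.
    players = (play_a, play_b) if claim_a else (play_b, play_a)
    turns = 0
    while pile > 0:
        pile -= players[turns % 2](pile)
        turns += 1
    # The player who made the last move took the last stone and won.
    return (turns - 1) % 2 == 0  # True iff the first mover won the playout

# Pile of 10: the first player wins (10 % 4 != 0). An honest prover A
# beats a dishonest B no matter how well B plays out the losing side.
print(debate(10, True, False, honest_move, honest_move))  # -> True
```

Whenever the two claims agree the verifier simply accepts them; when they disagree, the honest prover’s winning strategy certifies its claim against any opponent, which is exactly the at-least-one-honest-prover guarantee above.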
*Fiddling with the assumptions a little, instead of “optimal” we can probably just say that the AI is guaranteed to achieve this level of performance, whatever it is.