Your debate comes with some time limit T.

If T=0, use your best guess after looking at what the debaters said.
If T=N+1 and no debater challenges any of their opponent’s statements, then give your best answer assuming that every debater could have defended each of their statements from a challenge in a length-N debate.
Of course this assumption won’t be valid at the beginning of training. And even at the end of training we really only know something weaker like: “Neither debater thinks they would win by a significant expected margin in a length N debate.”
What can you infer if you see answers A and B to a question and know that both of them are defensible (in expectation) in a depth-N debate? That’s basically the open research question, with the hope being that you inductively make stronger and stronger inferences for larger N.
(This is very similar to asking when iterated amplification produces a good answer, up to the ambiguity about how you sample questions in amplification.)
(When we actually give judges instructions, for now we just tell them to assume that both debaters’ answers are reasonable. If one debater gives arguments where the opposite claim would also be “reasonable,” and the other debater gives arguments that are simple enough to be conclusively supported with the available depth, then the more helpful debater usually wins. Overall I don’t think that precision about this is a bottleneck right now.)
If T=N+1 and no debater challenges any of their opponent’s statements, then give your best answer assuming that every debater could have defended each of their statements from a challenge in a length-N debate.
Do you mean that every debater could have defended each of their statements s in a debate which lasted an additional N steps after s was made?
What happens if some statements are challenged? And what exactly does it mean to defend statements from a challenge? I get the feeling you’re suggesting something similar to the high school debate rule (which I rejected but didn’t analyze very much), where unrefuted statements are assumed to be established (unless patently false), refutations are assumed decisive unless they themselves are refuted, etc.
Of course this assumption won’t be valid at the beginning of training. And even at the end of training we really only know something weaker like: “Neither debater thinks they would win by a significant expected margin in a length N debate.”
At the end of training, isn’t the idea that the first player is winning a lot, since the first player can choose the best answer?
To explicate my concerns:
Are agents really incentivized to justify their assertions?
Are those justifications incentivized to be honest?
In the cases where the justifications aren’t fully verifiable, does it really make sense for the humans to trust anything they say? In particular, given the likelihood that one of the agents is lying?
I recognize that you’re saying these are open questions, I’m just trying to highlight where I’m confused—particularly as these questions are bound up with the question of what judge strategies should look like. It seems like a lot of pieces need to come together in just the right way, and I’m not currently seeing how judge strategies can simultaneously accomplish everything they need to.
You can and probably should symmetrize the game (see here).

Ah, I wasn’t aware of that document! Very helpful. The section previous to the one you link to seems quite relevant to my overall concerns, pointing in the direction of “yeah, in practice human judges have a lot of trouble incentivising debaters to properly justify their claims and defend them from critiques”. The rest of the document also seems potentially relevant to my confusions.
However, as Vojta mentions, asking the debaters to provide answers simultaneously seems to alleviate my concern about the equilibrium only by exacerbating the problem of providing good feedback toward the end of training; particularly in a deep NN version where the two debaters are actually using the same NN, there needs to be some way to break the symmetry, preventing both players from selecting the same answer all the time.
The asymmetric version of that, where one player chooses first, has the problem I mentioned: we will tend to know that the second player is more likely lying. OTOH, if we attempted a more symmetric version, where the two players’ answers are somehow pushed apart without favoring one or the other of them, then both players are probably lying (since you have to push them both away from the best answer). So I don’t see a viable way of symmetrizing responses for free-choice questions.
I like Vanessa’s proposal of restricting to multiple-choice questions rather than free-response questions, and pre-assigning debaters to specific positions.
there needs to be some way to break the symmetry, preventing both players from selecting the same answer all the time.
You can just rejection sample—if both players give the same answer, just resample the answers / move on to a new question.
“Same answer” can be evaluated by a human, or by an automated model.
If rejection sampling is extremely inefficient (almost all answers are the same) then it seems like you’re probably done with training. But if you really wanted to continue, you can probably importance sample in order to ensure different answers, as long as you can evaluate the original probability of any given answer.
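A minimal sketch of the resampling idea above. Here `sample_answer` and `same_answer` are hypothetical stand-ins for the debater policy and the (human or model) answer-equality check; the budget and the importance-sampling fallback are assumptions, not part of any actual implementation.

```python
# Rejection-sample debater answers until they differ. If the budget
# runs out (almost all pairs agree), training may simply be done, or
# you could fall back to importance sampling using the policy's
# answer probabilities, as suggested above.

import random

def sample_distinct_pair(sample_answer, same_answer, budget=100):
    """Return a pair of differing answers, or None if the budget is exhausted."""
    for _ in range(budget):
        a, b = sample_answer(), sample_answer()
        if not same_answer(a, b):
            return a, b
    return None
```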
Resampling.

If rejection sampling samples new questions as well as new answers, then we still get no training signal on cases where debaters agree. This seems potentially concerning, since agreement isn’t a great signal that things have actually converged to good answers.
Part of the concern is that, since both debaters are actually the same NN, you get the same answer early in training, not only once you converge to good answers. It’s not clear that this can be solved just by adding randomness to the NNs, because the NNs might train to eliminate the influence of the noise fairly early—after all, the noise isn’t providing useful information for them.
Rejection sampling could dwell on the same question, only resampling answers. This seems like a better solution. But it might still imply a whole lot of resampling. Again, even if there isn’t convergence to a really good answer yet, the gradient might be away from randomness, because (a) the right answer is too rare, (b) the vast majority of mutated answers are far worse than the noise-free answer, (c) the AI isn’t yet any good at defending the right answer from attacks.
Symmetrizing for the judge, not the AIs.
Another potential solution is to keep the asymmetry of “one AI decides on an answer first; then the second AI selects an answer”—but hide who is who from the judge. Present the AIs symmetrically, giving away no information about who was “first”. This mostly solves my problem.
However, one might worry that AIs would find some way to argue about who was really first vs second. Perhaps there are some signs which may indicate that a particular answer was second—“Your answer is the sort of answer which someone would choose after seeing my answer, and trying to select something close”. If such cases could be persuasively made, then the judge should find it to be significant evidence.
This is especially plausible if the AIs are allowed to make psychological claims about each other, and perhaps even examine each other’s NN activations to back up such claims. In this case, AI #1 could show that AI #2 saw AI #1’s answer initially, when it was selecting its own answer—thus breaking the symmetry between the agents. This is plausible because we might want to do that as a transparency tool. However, it completely ruins the mechanism I’m proposing here, since the mechanism depends on hiding information from the judge.
This seems to basically address my concern. However, maybe binary questions are a bad idea. One critique I’ve been given (due to Misha Yagudin) is that apparently binary questions are never really so simple, due to interpretation. Too many complex questions end up having answers fitting categories like “it depends” and “both are wrong”. For one thing, this means you want free-response answers in order to get the really useful answers (which point out wrong or fuzzy assumptions in your question). For another thing, this means that if you do try to stick to binary questions, the debates will spiral into interpretational issues which are not so likely to be useful.
Answer different questions.
This proposal is due to Scott Garrabrant, and I think it resolves my issue completely. However, Scott had a very different motivation for the proposal, which I don’t expect to do justice to. I also expect the version I’ll write out may be a little simpler than Scott’s, missing some ideas which were important to him.
Why should the two debaters be forced to give different answers to the same question? Why not get them to answer two entirely different questions, and judge which of the two was more persuasive?
As you mentioned in the parent, it’s not really obvious in general whether two answers are “the same answer”. It’s also not obvious whether two answers are really mutually exclusive. The two answers might be in different ontologies, and very hard to compare. For example, a question about large-scale irrigation might get one answer in the ontology of engineering, which discusses questions of fluid mechanics and so on; and another in the ontology of economics, which discusses mechanism design, ethics of distribution of benefits of state-funded infrastructure, etc. It could be difficult to put these into common terms to decide whether they’re mutually exclusive. But maybe this is a fake difficulty; we don’t actually care whether they’re mutually exclusive, we just have to rank the two different sides for convincingness. This perspective allows us to sample entirely different questions for the two sides, eliminating the training-signal problem of overlapping answers.
The feb 2020 proposal for symmetrizing debate which you cited mandates that both sides develop their own argument, and then attack weak points in the other’s argument. So you can think of this as a 4-player game:
Assertion A and assertion B are sampled. These may be totally unrelated, so the risk of A=B is very low.
AD, the defense of A, is a player with the job of making a constructive argument for A.
AC, the critic for A, is a player with the job of critiquing the argument for A.
BD defends B.
BC critiques the defense of B.
AD and BC are on a team; BD and AC are on a team. The judge decides which of the teams did better.
Otherwise, the proposal is much the same as the feb 2020 proposal.
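The team structure above can be made concrete with a toy scoring rule. How the judge aggregates the four contributions is my assumption here (each team scores its defense plus its critique of the opposing assertion, with each argument rated for persuasiveness in [0, 1]); the thread only says the judge decides which team did better.

```python
# Toy judging rule for the 4-player game: AD (defends A) teams with
# BC (critiques B); BD (defends B) teams with AC (critiques A).
# Inputs are persuasiveness scores in [0, 1] as rated by the judge.

def winning_team(defense_a, critique_a, defense_b, critique_b):
    """Return which team did better under a simple additive score."""
    team_ad_bc = defense_a + critique_b   # argue for A, attack B
    team_bd_ac = defense_b + critique_a   # argue for B, attack A
    return "AD+BC" if team_ad_bc > team_bd_ac else "BD+AC"
```

Note that because A and B may be entirely unrelated, nothing here requires comparing the two assertions directly—only the overall persuasiveness of each team's showing.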
After training, how do we use this model to answer questions? After all, unlike the usual case, the AIs are not directly learning to give answers to questions. They are only learning to defend claims (and critique defenses of claims).
Scott’s proposal is that you then utilize the system by asking it to defend claims such as “In order to build a rocket ship, I will follow the first suggestion you make in your defense of this claim” or “You will convince me that the best answer to the question ‘what will the weather be like tomorrow’ is the first such answer you mention in your defense of this claim”, etc.
Scott’s solution is obviously a bit complicated compared to the usual debate setup, but his point was that the apparent simplicity of the usual setup is actually hiding something, because you don’t really get anything out of the assumption that the two players are answering the same question.
It seems like you’ve ignored the possibility of importance sampling?
More broadly if this ends up being a problem it’s basically an exploration problem that I expect we can solve with simple ML tricks. E.g. you could include an entropy bonus so that the agents are incentivized to say different things, and anneal that away as training progresses.
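The annealed entropy bonus mentioned above can be sketched as follows: add beta times the entropy of the answer distribution to the objective, and decay beta over training so the pressure to say different things fades away. The particular schedule and constants here are illustrative assumptions.

```python
# Entropy-bonused objective with an annealing schedule. Early in
# training beta is large, rewarding spread-out answer distributions;
# beta decays toward zero so the bonus vanishes as training progresses.

import math

def entropy(probs):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def objective(reward, answer_probs, step, beta0=0.1, half_life=10_000):
    beta = beta0 * 0.5 ** (step / half_life)  # anneal the bonus away
    return reward + beta * entropy(answer_probs)
```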
his point was that the apparent simplicity of the usual setup is actually hiding something, because you don’t really get anything out of the assumption that the two players are answering the same question.
Sure? I feel like the argument for safety is that you have two equally-matched players that are incentivized to find flaws in each other’s arguments, which is also true in Scott’s proposal. It doesn’t feel to me like that argument for safety depended much on them answering the same question.
(I feel like I’m restating what you said, I guess I’m confused why you interpret this as evidence that the simplicity of the setup is “hiding something”.)
It seems like you’ve ignored the possibility of importance sampling?
Ah, right, I agree. I forgot about that suggestion as I was writing. It seems likely some version of this would work.
(I feel like I’m restating what you said, I guess I’m confused why you interpret this as evidence that the simplicity of the setup is “hiding something”.)
Yep, sorry, I think you should take that as something-about-Scott’s-point-abram-didn’t-explain. I still disclaim myself as maybe missing part of Scott’s point. But: what the simpler setup is “hiding” is the complexity of comparing answers:
The complexity of determining whether two claims are “different”.
The complexity of determining whether two claims are mutually exclusive.
The complexity of comparing the quality of different arguments, when the different answers may be expressed in very different ontologies, and deal with very difficult-to-compare considerations.
Making the two sides defend entirely unrelated claims makes all this obvious. In addition, it makes the first two bullet points irrelevant, removing a “fake difficulty” from the setup.
Okay, that all makes sense. One maybe-caveat-or-disagreement:
The complexity of comparing the quality of different arguments, when the different answers may be expressed in very different ontologies, and deal with very difficult-to-compare considerations.
I do think that answering the same question does make it meaningfully easier to compare answers, though I agree it’s still not obvious that it’s easy on some absolute scale for the reasons you outline.
Even if you keep the argumentation phase asymmetric, you might want to make the answering phase simultaneous or at least allow the second AI to give the same answer as the first AI (which can mean a draw by default).
This doesn’t make for a very good training signal, but might have better equilibria.
Do you mean that every debater could have defended each of their statements s in a debate which lasted an additional N steps after s was made? What happens if some statements are challenged? And what exactly does it mean to defend statements from a challenge?
Yes. N is the remaining length of the debate. As discussed in the paper, when one player thinks that the other is making an indefensible claim, we zoom in on the subclaim and use the remaining time to resolve it.
I get the feeling you’re suggesting something similar to the high school debate rule (which I rejected but didn’t analyze very much), where unrefuted statements are assumed to be established (unless patently false), refutations are assumed decisive unless they themselves are refuted, etc.
There is a time/depth limit. A discussion between two people can end up with one answer that is unchallenged, or two proposals that everyone agrees can’t be resolved in the remaining time. If there are conflicting answers that debaters don’t expect to be able to resolve in the remaining time, the strength of inference will depend on how much time is remaining, and will mean nothing if there is no remaining time.
At the end of training, isn’t the idea that the first player is winning a lot, since the first player can choose the best answer?
I’m describing what you should infer about an issue that has come up where neither player wants to challenge the other’s stance.
Are agents really incentivized to justify their assertions?
Under the norms I proposed in the grandparent, if one player justifies and the other doesn’t (nor challenges the justification), the one who justifies will win. So it seems like they are incentivized to justify.
Are those justifications incentivized to be honest?
If they are dishonest then the other player has the opportunity to challenge them. So initially making a dishonest justification may be totally fine, but eventually the other player will learn to challenge and you will need to be honest in order to defend.
In the cases where the justifications aren’t fully verifiable, does it really make sense for the humans to trust anything they say? In particular, given the likelihood that one of the agents is lying?
It’s definitely an open question how much can be justified in a depth N debate.
I recognize that you’re saying these are open questions, I’m just trying to highlight where I’m confused—particularly as these questions are bound up with the question of what judge strategies should look like. It seems like a lot of pieces need to come together in just the right way, and I’m not currently seeing how judge strategies can simultaneously accomplish everything they need to.
It seems like the only ambiguity in the proposal in the grandparent is: “How much should you infer from the fact that a statement can be defended in a length T debate?” I agree that we need to answer this question to make the debate fully specified (of course we wanted to answer it anyway in order to use debate). My impression is that that isn’t what you are confused about and that there’s a more basic communication problem.
In practice this doesn’t seem to be an important part of the difficulty in getting debates to work, for the reasons I sketched above—debaters are free to choose what justifications they give, so a good debater at depth T+1 will give statements that can be justified at depth T (in the sense that a conflicting opinion with a different upshot couldn’t be defended at depth T), and the judge will basically ignore statements where conflicting positions can both be justified at depth T. It seems likely there is some way to revise the rules so that the judge instructions don’t have to depend on “assume that answer can be defended at depth T,” but it doesn’t seem like a priority.
It seems like the only ambiguity in the proposal in the grandparent is: [...] My impression is that that isn’t what you are confused about and that there’s a more basic communication problem.
Yeah. From my perspective, either I’m being dense and your proposed judge policy is perfectly clear, or you’re being dense about the fact that your proposal isn’t clear. My previous comments were mainly aimed at trying to get clear on what the proposal is (and secondarily, trying to clarify why I have concerns which would make the clarity important). Then your replies all seemed predicated on the assumption that the proposal in “the grandparent” (now the great-grandparent) was already clear.
All I got from the great-grandparent was a proposal for what happens if no debater contests any claims. It seems pretty explicit that you’re only handling that case:
If T=0, use your best guess after looking at what the debaters said.
If T=N+1 and no debater challenges any of their opponent’s statements, then give your best answer assuming that every debater could have defended each of their statements from a challenge in a length-N debate.
You then make some further remarks which are not actually about the judging strategy, but rather, about the question of what inferences we’re justified to make upon observing a debate. For me this was moving too fast; I want to be clear on what the proposed strategy is first, and then reason about consequences.
Your most recent reply does make a few further remarks about what the strategy might be, but I’m not sure how to integrate them into a cohesive judging strategy. Could you try again to describe what the full judging strategy is, including how judges deal with debaters contesting each other’s statements?
A couple of other things I’m unclear on:
Do the debaters know how long the debate is going to be?
To what extent are you trying to claim some relationship between the judge strategy you’re describing and the honest one? EG, that it’s eventually close to honest judging? (I’m asking whether this seems like an important question for the discussion vs one which should be set aside.)
Sorry for not understanding how much context was missing here.
The right starting point for your question is this writeup which describes the state of debate experiments at OpenAI as of end-of-2019 including the rules we were using at that time. Those rules are a work in progress but I think they are good enough for the purpose of this discussion.
In those rules: If we are running a depth-T+1 debate about X and we encounter a disagreement about Y, then we start a depth-T debate about Y and judge exclusively based on that. We totally ignore the disagreement about X.
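That end-of-2019 recursion rule can be sketched as a short recursion. The claim and judge representations here are my assumptions; the rule itself (a contested subclaim replaces the original question at one unit less depth, and the original disagreement is ignored) is as described above.

```python
# Sketch of the end-of-2019 recursion rule: judge a claim at a given
# depth by zooming in on the first contested subclaim, spending one
# unit of depth per zoom, and judging exclusively on the innermost
# dispute. `first_disagreement` and `judge_directly` are stand-ins.

def judge_debate(claim, depth, first_disagreement, judge_directly):
    """Judge `claim` in a depth-`depth` debate.

    `first_disagreement(claim)` returns a contested subclaim, or None.
    `judge_directly(claim)` is the judge's unaided verdict (depth 0).
    """
    if depth == 0:
        return judge_directly(claim)
    subclaim = first_disagreement(claim)
    if subclaim is None:
        # No challenge: treat each statement as defensible at depth-1.
        return judge_directly(claim)
    # Zoom in: judge exclusively on the contested subclaim, ignoring
    # the remaining disagreement about the original claim.
    return judge_debate(subclaim, depth - 1,
                        first_disagreement, judge_directly)
```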
Our current rules—to hopefully be published sometime this quarter—handle recursion in a slightly more nuanced way. In the current rules, after debating Y we should return to the original debate. We allow the debaters to make a new set of arguments, and it may be that one debater now realizes they should concede, but it’s important that a debater who had previously made an untenable claim about X will eventually pay a penalty for doing so (in addition to whatever payoff they receive in the debate about Y). I don’t expect this paragraph to be clear and don’t think it’s worth getting into until we publish an update, but wanted to flag it.
Do the debaters know how long the debate is going to be?
Yes.
To what extent are you trying to claim some relationship between the judge strategy you’re describing and the honest one? EG, that it’s eventually close to honest judging? (I’m asking whether this seems like an important question for the discussion vs one which should be set aside.)
If debate works, then at equilibrium the judge will always be favoring the better answer. If furthermore the judge believes that debate works, then this will also be their honest belief. So if judges believe in debate then it looks to me like the judging strategy must eventually approximate honest judging. But this is downstream of debate working; it doesn’t play an important role in the argument that debate works or anything like that.
Yep, that document was what I needed to see. I wouldn’t say all my confusions are resolved, but I need to think more carefully about what’s in there. Thanks!
It seems the symmetry concerns of that document are quite different from the concerns I was voicing. The symmetry concerns in the document are, iiuc,
The debate goes well if the honest player expounds an argument, and the dishonest player critiques that argument. However, the debate goes poorly if those roles end up reversed. Therefore we force both players to do both.
OTOH, my symmetry concerns can be summarized as follows:
If player 2 chooses an answer after player 1 (getting access to player 1’s answer in order to select a different one), then assuming competent play, player 1’s answer will almost always be the better one. This prior taints the judge’s decision in a way which seems to seriously reduce the training signal and threaten the desired equilibrium.
If the two players choose simultaneously, then it’s hard to see how to discourage them from selecting the same answer. This seems likely at late stages due to convergence, and also likely at early stages due to the fact that both players actually use the same NN. This again seriously reduces the training signal.
I now believe that this concern can be addressed, although it seems a bit fiddly, and the mechanism which I currently believe addresses the problem is somewhat complex.
Known Debate Length
I’m a bit confused why you would make the debate length known to the debaters. This seems to allow them to make indefensible statements at the very end of a debate, secure in the knowledge that they can’t be critiqued. One step before the end, they can make statements which can’t be convincingly critiqued in one step. And so on.
Instead, it seems like you’d want the debate to end randomly, according to a memoryless distribution. This way, the expected future debate length is the same at all times, meaning that any statement made at any point is facing the same expected demand of defensibility.
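The memoryless distribution in question is the geometric distribution: end the debate after each statement with some fixed probability p. The check below is a numerical illustration of the claim that the expected remaining length is then the same at every point in the debate; the function and its parameters are illustrative, not from any debate implementation.

```python
# If the debate ends after each step independently with probability p,
# total length N has P(N = n) = (1 - p)**n * p, and the expected number
# of remaining steps given survival so far is always (1 - p) / p.

def expected_remaining(p, already, horizon=2000):
    """E[steps left | the debate has already survived `already` steps]."""
    num = sum((n - already) * (1 - p) ** n * p for n in range(already, horizon))
    den = sum((1 - p) ** n * p for n in range(already, horizon))
    return num / den
```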
Factored Cognition
I currently think all my concerns can be addressed if we abandon the link to factored cognition and defend a less ambitious thesis about debate. The feb 2020 proposal does touch on some of my concerns there, by enforcing a good argumentative structure, rather than allowing the debate to spiral out of control (due to e.g. delaying tactics).
However, my overall position is still one of skepticism wrt the link to factored cognition. The most salient reason for me ATM is the concern that debaters needn’t structure their arguments as DAGs which ground out in human-verifiable premises, but rather, can make large circular arguments (too large for the debate structure to catch) or unbounded argument chains (or simply very very high depth argument trees, which contain a flaw at a point far too deep for debate to find).
ETA: Having now read more of the feb 2020 report, I see that very similar concerns are expressed near the end—the long computation problem seems pretty similar to what I’m pointing at.
I’m a bit confused why you would make the debate length known to the debaters. This seems to allow them to make indefensible statements at the very end of a debate, secure in the knowledge that they can’t be critiqued. One step before the end, they can make statements which can’t be convincingly critiqued in one step. And so on.
[...]
The most salient reason for me ATM is the concern that debaters needn’t structure their arguments as DAGs which ground out in human-verifiable premises, but rather, can make large circular arguments (too large for the debate structure to catch) or unbounded argument chains (or simply very very high depth argument trees, which contain a flaw at a point far too deep for debate to find).
If I assert “X because Y & Z” and the depth limit is 0, you aren’t intended to say “Yup, checks out,” unless Y and Z and the implication are self-evident to you. Low-depth debates are supposed to ground out with the judge’s priors / low-confidence in things that aren’t easy to establish directly (because if I’m only updating on “Y looks plausible in a very low-depth debate” then I’m going to say “I don’t know but I suspect X” is a better answer than “definitely X”). That seems like a consequence of the norms in my original answer.
In this context, a circular argument just isn’t very appealing. At the bottom you are going to be very uncertain, and all that uncertainty is going to propagate all the way up.
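A toy illustration of that propagation claim: when subclaims ground out in the judge's uncertain priors, the root credence is discounted accordingly. Treating a claim's credence as the product of its subclaims' credences is a crude assumption, used only to make "all that uncertainty propagates all the way up" concrete.

```python
# Argument trees are either a bare claim (a leaf, judged on priors) or
# a (claim, subclaims) pair. With no depth left, everything falls back
# on the judge's prior; otherwise a claim is only as credible as the
# product of its subclaims' credences.

def credence(node, prior, depth):
    """Judge's credence in `node` given a depth budget."""
    if depth == 0 or isinstance(node, str):
        claim = node if isinstance(node, str) else node[0]
        return prior(claim)
    _, subclaims = node
    out = 1.0
    for sub in subclaims:
        out *= credence(sub, prior, depth - 1)
    return out
```

Under this stand-in, a circular argument buys nothing: following the cycle just spends depth until the claim is judged on the prior, so the loop never lifts the root above what the priors support.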
Instead, it seems like you’d want the debate to end randomly, according to a memoryless distribution. This way, the expected future debate length is the same at all times, meaning that any statement made at any point is facing the same expected demand of defensibility.
If you do it this way the debate really doesn’t seem to work, as you point out.
For my part I mostly care about the ambitious thesis.
If the two players choose simultaneously, then it’s hard to see how to discourage them from selecting the same answer. This seems likely at late stages due to convergence, and also likely at early stages due to the fact that both players actually use the same NN. This again seriously reduces the training signal.
If player 2 chooses an answer after player 1 (getting access to player 1’s answer in order to select a different one), then assuming competent play, player 1’s answer will almost always be the better one. This prior taints the judge’s decision in a way which seems to seriously reduce the training signal and threaten the desired equilibrium.
I disagree with both of these as objections to the basic strategy, but don’t think they are very important.
Your debate comes with some time limit T.
If T=0, use your best guess after looking at what the debaters said.
If T=N+1 and no debater challenges any of their opponent’s statements, then give your best answer assuming that every debater could have defended each of their statements from a challenge in a length-N debate.
Of course this assumption won’t be valid at the beginning of training. And even at the end of training we really only know something weaker like: “Neither debater thinks they would win by a significant expected margin in a length N debate.”
What can you infer if you see answers A and B to a question and know that both of them are defensible (in expectation) in a depth-N debate? That’s basically the open research question, with the hope being that you inductively make stronger and stronger inferences for larger N.
(This is very similar to asking when iterated amplification produces a good answer, up to the ambiguity about how you sample questions in amplification.)
(When we actually give judges instructions for now we just tell them to assume that both debater’s answers are reasonable. If one debater gives arguments where the opposite claim would also be “reasonable,” and the other debater gives arguments that are simple enough to be conclusively supported with the available depth, then the more helpful debater usually wins. Overall I don’t think that precision about this is a bottleneck right now.)
Do you mean that every debater could have defended each of their statements s in a debate which lasted an additional N steps after s was made?
What happens if some statements are challenged? And what exactly does it mean to defend statements from a challenge? I get the feeling you’re suggesting something similar to the high school debate rule (which I rejected but didn’t analyze very much), where unrefuted statements are assumed to be established (unless patently false), refutations are assumed decisive unless they themselves are refuted, etc.
At the end of training, isn’t the idea that the first player is winning a lot, since the first player can choose the best answer?
To explicate my concerns:
Are agents really incentivized to justify their assertions?
Are those justifications incentivized to be honest?
In the cases where the justifications aren’t fully verifiable, does it really make sense for the humans to trust anything they say? In particular, given the likelihood that one of the agents is lying?
I recognize that you’re saying these are open questions, I’m just trying to highlight where I’m confused—particularly as these questions are bound up with the question of what judge strategies should look like. It seems like a lot of pieces need to come together in just the right way, and I’m not currently seeing how judge strategies can simultaneously accomplish everything they need to.
You can and probably should symmetrize the game (see here).
Ah, I wasn’t aware of that document! Very helpful. The section previous to the one you link to seems quite relevant to my overall concerns, pointing in the direction of “yeah, in practice human judges have a lot of trouble incentivising debaters to properly justify their claims and defend them from critiques”. The rest of the document also seems potentially relevant to my confusions.
However, as Vojta mentions, asking the debaters to provide answers simultaneously seems to alleviate my concern about the equilibrium only by exacerbating the problem of providing good feedback toward the end of training; particularly in a deep NN version where the two debaters are actually using the same NN, there needs to be some way to break the symmetry, preventing both players from selecting the same answer all the time.
The asymmetric version of that, where one player chooses first, has the problem I mentioned: we will tend to know that the second player is more likely lying. OTOH, if we attempted a more symmetric version, where the two player’s answers are somehow pushed apart without favoring one or the other of them, then both players are probably lying (since you have to push them both away from the best answer). So I don’t see a viable way of symmetrizing responses for free-choice questions.
I like Vanessa’s proposal of restricting to multiple-choice questions rather than free-response questions, and pre-assigning debaters to specific positions.
You can just rejection sample—if both players give the same answer, just resample the answers / move on to a new question.
“Same answer” can be evaluated by a human, or by an automated model.
If rejection sampling is extremely inefficient (almost all answers are the same) then it seems like you’re probably done with training. But if you really wanted to continue, you can probably importance sample in order to ensure different answers, as long as you can evaluate the original probability of any given answer.
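As a concrete illustration of resampling answers while keeping the question fixed, here is a minimal sketch; `sample_answer` and `same_answer` are hypothetical stand-ins (not anyone's actual code), and the toy policy is rigged so the debaters usually agree:

```python
import random

# Toy sketch of rejection sampling over debater answers.

def sample_answer(question, rng):
    # Stand-in policy: both debaters share it and usually give the same answer.
    return rng.choice(["A", "A", "A", "B"])

def same_answer(a, b):
    # In practice this check could be done by a human or an automated model.
    return a == b

def sample_distinct_pair(question, rng, max_tries=1000):
    """Resample answers (keeping the question fixed) until they differ."""
    for _ in range(max_tries):
        a1 = sample_answer(question, rng)
        a2 = sample_answer(question, rng)
        if not same_answer(a1, a2):
            return a1, a2
    # Near-total agreement: arguably training on this question is done.
    return None
```

If almost every pair agrees, the loop exhausts its budget and signals that this question no longer provides training signal, matching the point above.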
Resampling.
If rejection sampling samples new questions as well as new answers, then we still get no training signal on cases where debaters agree. This seems potentially concerning, since agreement isn’t a great signal that things have actually converged to good answers.
Part of the concern is that, since both debaters are actually the same NN, you get the same answer early, not only as you converge to good answers. It’s not clear that this can be solved just by adding randomness to the NNs, because the NNs might train to eliminate the influence of the noise fairly early—after all, the noise isn’t providing useful information for them.
Rejection sampling could dwell on the same question, only resampling answers. This seems like a better solution. But it might still imply a whole lot of resampling. Again, even if there isn’t convergence to a really good answer yet, the gradient might be away from randomness, because (a) the right answer is too rare, (b) the vast majority of mutated answers are far worse than the noise-free answer, (c) the AI isn’t yet any good at defending the right answer from attacks.
Symmetrizing for the judge, not the AIs.
Another potential solution is to keep the asymmetry of “one AI decides on an answer first; then the second AI selects an answer”—but hide who is who from the judge. Present the AIs symmetrically, giving away no information about who was “first”. This mostly solves my problem.
However, one might worry that AIs would find some way to argue about who was really first vs second. Perhaps there are some signs which may indicate that a particular answer was second—“Your answer is the sort of answer which someone would choose after seeing my answer, and trying to select something close”. If such cases could be persuasively made, then the judge should find it to be significant evidence.
This is especially plausible if the AIs are allowed to make psychological claims about each other, and perhaps even examine each other's NN activations to back up such claims. In this case, AI #1 could show that AI #2 saw AI #1's answer initially, when it was selecting its own answer—thus breaking the symmetry between the agents. This is plausible, because we might want to do that as a transparency tool. However, it completely ruins the mechanism I'm proposing here, since the mechanism depends on hiding information from the judge.
Use binary questions, and assign positions rather than allowing free-response answers.
This seems to basically address my concern. However, maybe binary questions are a bad idea. One critique I’ve been given (due to Misha Yagudin) is that apparently binary questions are never really so simple, due to interpretation. Too many complex questions end up having answers fitting categories like “it depends” and “both are wrong”. For one thing, this means you want free-response answers in order to get the really useful answers (which point out wrong or fuzzy assumptions in your question). For another thing, this means that if you do try to stick to binary questions, the debates will spiral into interpretational issues which are not so likely to be useful.
Answer different questions.
This proposal is due to Scott Garrabrant, and I think it resolves my issue completely. However, Scott had a very different motivation for the proposal, which I don’t expect to do justice to. I also expect the version I’ll write out may be a little simpler than Scott’s, missing some ideas which were important to him.
Why should the two debaters be forced to give different answers to the same question? Why not get them to answer two entirely different questions, and judge which of the two was more persuasive?
As you mentioned in the parent, it’s not really obvious in general whether two answers are “the same answer”. It’s also not obvious whether two answers are really mutually exclusive. The two answers might be in different ontologies, and very hard to compare. For example, a question about large-scale irrigation might get one answer in the ontology of engineering, which discusses questions of fluid mechanics and so on; and another in the ontology of economics, which discusses mechanism design, ethics of distribution of benefits of state-funded infrastructure, etc. It could be difficult to put these into common terms to decide whether they’re mutually exclusive. But maybe this is a fake difficulty; we don’t actually care whether they’re mutually exclusive, we just have to rank the two different sides for convincingness. This perspective allows us to sample entirely different questions for the two sides, eliminating the training-signal problem of overlapping answers.
The feb 2020 proposal for symmetrizing debate which you cited mandates that both sides develop their own argument, and then attack weak points in the other’s argument. So you can think of this as a 4-player game:
Assertion A and assertion B are sampled. These may be totally unrelated, so the risk of A=B is very low.
AD, the defense of A, is a player with the job of making a constructive argument for A.
AC, the critic for A, is a player with the job of critiquing the argument for A.
BD defends B.
BC critiques the defense of B.
AD and BC are on a team; BD and AC are on a team. The judge decides which of the teams did better.
Otherwise, the proposal is much the same as the feb 2020 proposal.
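The team structure above can be written down directly. This is a toy encoding: the role names AD/AC/BD/BC follow the text, while the judge's scores are hypothetical inputs:

```python
from dataclasses import dataclass

# Toy encoding of the 4-player game described above.

@dataclass(frozen=True)
class Team:
    defender: str  # constructs the argument for its assertion
    critic: str    # critiques the defense of the *other* assertion

TEAM_1 = Team(defender="AD", critic="BC")
TEAM_2 = Team(defender="BD", critic="AC")

def winning_team(score_team_1: float, score_team_2: float) -> Team:
    """The judge decides which of the two teams did better overall."""
    return TEAM_1 if score_team_1 > score_team_2 else TEAM_2
```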
After training, how do we use this model to answer questions? After all, unlike the usual case, the AIs are not directly learning to give answers to questions. They are only learning to defend claims (and critique defenses of claims).
Scott’s proposal is that you then utilize the system by asking it to defend claims such as: “In order to build a rocket ship, I will follow the first suggestion you make in your defense of this claim.”, “You will convince me that the best answer to the question ‘what will the weather be like tomorrow’ is the first such answer you mention in your defense of this claim”, etc.
Scott’s solution is obviously a bit complicated compared to the usual debate setup, but his point was that the apparent simplicity of the usual setup is actually hiding something, because you don’t really get anything out of the assumption that the two players are answering the same question.
It seems like you’ve ignored the possibility of importance sampling?
More broadly if this ends up being a problem it’s basically an exploration problem that I expect we can solve with simple ML tricks. E.g. you could include an entropy bonus so that the agents are incentivized to say different things, and anneal that away as training progresses.
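As a sketch of the kind of trick being suggested here (the coefficient value and the linear schedule are illustrative assumptions, not a recommendation):

```python
import math

# Illustrative sketch of an annealed entropy bonus: early in training the
# policy is rewarded for spreading probability over answers (so the two
# debaters tend to say different things); the bonus is annealed to zero.

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_coefficient(step, total_steps, beta0=0.1):
    """Linearly anneal the coefficient from beta0 down to 0."""
    return beta0 * max(0.0, 1.0 - step / total_steps)

def objective(expected_reward, answer_probs, step, total_steps):
    # Maximize reward plus the (shrinking) entropy bonus.
    bonus = entropy_coefficient(step, total_steps) * entropy(answer_probs)
    return expected_reward + bonus
```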
Sure? I feel like the argument for safety is that you have two equally-matched players that are incentivized to find flaws in each other’s arguments, which is also true in Scott’s proposal. It doesn’t feel to me like that argument for safety depended much on them answering the same question.
(I feel like I’m restating what you said, I guess I’m confused why you interpret this as evidence that the simplicity of the setup is “hiding something”.)
Ah, right, I agree. I forgot about that suggestion as I was writing. It seems likely some version of this would work.
Yep, sorry, I think you should take that as something-about-Scott's-point-Abram-didn't-explain. I still disclaim myself as maybe missing part of Scott's point. But: what the simpler setup is "hiding" is the complexity of comparing answers:
The complexity of determining whether two claims are “different”.
The complexity of determining whether two claims are mutually exclusive.
The complexity of comparing the quality of different arguments, when the different answers may be expressed in very different ontologies, and deal with very difficult-to-compare considerations.
Making the two sides defend entirely unrelated claims makes all this obvious. In addition, it makes the first two bullet points irrelevant, removing a “fake difficulty” from the setup.
Okay, that all makes sense. One maybe-caveat-or-disagreement:
I do think that answering the same question does make it meaningfully easier to compare answers, though I agree it’s still not obvious that it’s easy on some absolute scale for the reasons you outline.
Even if you keep the argumentation phase asymmetric, you might want to make the answering phase simultaneous or at least allow the second AI to give the same answer as the first AI (which can mean a draw by default).
This doesn’t make for a very good training signal, but might have better equilibria.
Responded to this in my reply to Abram’s comment.
Yes. N is the remaining length of the debate. As discussed in the paper, when one player thinks that the other is making an indefensible claim then we zoom in on the subclaim and use the remaining time to resolve it.
There is a time/depth limit. A discussion between two people can end up with one answer that is unchallenged, or two proposals that everyone agrees can’t be resolved in the remaining time. If there are conflicting answers that debaters don’t expect to be able to resolve in the remaining time, the strength of inference will depend on how much time is remaining, and will mean nothing if there is no remaining time.
I’m describing what you should infer about an issue that has come up where neither player wants to challenge the other’s stance.
Under the norms I proposed in the grandparent, if one player justifies and the other doesn't (nor challenges the justification), the one who justifies will win. So it seems like they are incentivized to justify.
If they are dishonest then the other player has the opportunity to challenge them. So initially making a dishonest justification may be totally fine, but eventually the other player will learn to challenge and you will need to be honest in order to defend.
It’s definitely an open question how much can be justified in a depth N debate.
It seems like the only ambiguity in the proposal in the grandparent is: “How much should you infer from the fact that a statement can be defended in a length T debate?” I agree that we need to answer this question to make the debate fully specified (of course we wanted to answer it anyway in order to use debate). My impression is that isn’t what you are confused about and that there’s a more basic communication problem.
In practice this doesn’t seem to be an important part of the difficulty in getting debates to work, for the reasons I sketched above—debaters are free to choose what justifications they give, so a good debater at depth T+1 will give statements that can be justified at depth T (in the sense that a conflicting opinion with a different upshot couldn’t be defended at depth T), and the judge will basically ignore statements where conflicting positions can both be justified at depth T. It seems likely there is some way to revise the rules so that the judge instructions don’t have to depend on “assume that answer can be defended at depth T,” but it doesn’t seem like a priority.
Yeah. From my perspective, either I’m being dense and your proposed judge policy is perfectly clear, or you’re being dense about the fact that your proposal isn’t clear. My previous comments were mainly aimed at trying to get clear on what the proposal is (and secondarily, trying to clarify why I have concerns which would make the clarity important). Then your replies all seemed predicated on the assumption that the proposal in “the grandparent” (now the great-grandparent) was already clear.
All I got from the great-grandparent was a proposal for what happens if no debater contests any claims. It seems pretty explicit that you’re only handling that case:
You then make some further remarks which are not actually about the judging strategy, but rather, about the question of what inferences we’re justified to make upon observing a debate. For me this was moving too fast; I want to be clear on what the proposed strategy is first, and then reason about consequences.
Your most recent reply does make a few further remarks about what the strategy might be, but I’m not sure how to integrate them into a cohesive judging strategy. Could you try again to describe what the full judging strategy is, including how judges deal with debaters contesting each other’s statements?
A couple of other things I’m unclear on:
Do the debaters know how long the debate is going to be?
To what extent are you trying to claim some relationship between the judge strategy you’re describing and the honest one? EG, that it’s eventually close to honest judging? (I’m asking whether this seems like an important question for the discussion vs one which should be set aside.)
Sorry for not understanding how much context was missing here.
The right starting point for your question is this writeup which describes the state of debate experiments at OpenAI as of end-of-2019 including the rules we were using at that time. Those rules are a work in progress but I think they are good enough for the purpose of this discussion.
In those rules: If we are running a depth-T+1 debate about X and we encounter a disagreement about Y, then we start a depth-T debate about Y and judge exclusively based on that. We totally ignore the disagreement about X.
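A minimal sketch of that recursion rule, under illustrative assumptions (the function names and the way disagreements are surfaced are stand-ins, not the actual rules):

```python
# Toy sketch of the recursion rule described above: a disagreement over a
# subclaim replaces the original debate with a shallower debate about it,
# and the original disagreement is ignored entirely.

def judge(claim, depth, disagreement_for, judge_directly):
    """`disagreement_for` maps a claim to a contested subclaim (or None);
    `judge_directly` is the judge's direct verdict when no disagreement
    remains or the depth budget is exhausted."""
    sub = disagreement_for(claim) if depth > 0 else None
    if sub is None:
        return judge_directly(claim)
    # Zoom in: judge exclusively on the subclaim at reduced depth.
    return judge(sub, depth - 1, disagreement_for, judge_directly)
```

For example, if the debaters disagree about X via Y via Z, a depth-3 debate ends up judged entirely on Z.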
Our current rules—to hopefully be published sometime this quarter—handle recursion in a slightly more nuanced way. In the current rules, after debating Y we should return to the original debate. We allow the debaters to make a new set of arguments, and it may be that one debater now realizes they should concede, but it’s important that a debater who had previously made an untenable claim about X will eventually pay a penalty for doing so (in addition to whatever payoff they receive in the debate about Y). I don’t expect this paragraph to be clear and don’t think it’s worth getting into until we publish an update, but wanted to flag it.
Yes.
If debate works, then at equilibrium the judge will always be favoring the better answer. If furthermore the judge believes that debate works, then this will also be their honest belief. So if judges believe in debate then it looks to me like the judging strategy must eventually approximate honest judging. But this is downstream of debate working; it doesn’t play an important role in the argument that debate works or anything like that.
Yep, that document was what I needed to see. I wouldn’t say all my confusions are resolved, but I need to think more carefully about what’s in there. Thanks!
Symmetry Concerns
It seems the symmetry concerns of that document are quite different from the concerns I was voicing. The symmetry concerns in the document are, iiuc,
The debate goes well if the honest player expounds an argument, and the dishonest player critiques that argument. However, the debate goes poorly if those roles end up reversed. Therefore we force both players to do both.
OTOH, my symmetry concerns can be summarized as follows:
If player 2 chooses an answer after player 1 (getting access to player 1's answer in order to select a different one), then assuming competent play, player 1's answer will almost always be the better one. This prior taints the judge's decision in a way which seems to seriously reduce the training signal and threaten the desired equilibrium.
If the two players choose simultaneously, then it’s hard to see how to discourage them from selecting the same answer. This seems likely at late stages due to convergence, and also likely at early stages due to the fact that both players actually use the same NN. This again seriously reduces the training signal.
I now believe that this concern can be addressed, although it seems a bit fiddly, and the mechanism which I currently believe addresses the problem is somewhat complex.
Known Debate Length
I’m a bit confused why you would make the debate length known to the debaters. This seems to allow them to make indefensible statements at the very end of a debate, secure in the knowledge that they can’t be critiqued. One step before the end, they can make statements which can’t be convincingly critiqued in one step. And so on.
Instead, it seems like you’d want the debate to end randomly, according to a memoryless distribution. This way, the expected future debate length is the same at all times, meaning that any statement made at any point is facing the same expected demand of defensibility.
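A geometric stopping rule has exactly this property. A quick simulation (with an arbitrary per-round stopping probability p = 0.2) illustrates that the expected remaining length is 1/p both unconditionally and conditional on the debate having already run for several rounds:

```python
import random

# Simulation of a memoryless stopping rule: after each round the debate
# ends with probability p, so the expected number of remaining rounds is
# 1/p regardless of how many rounds have already happened.

def debate_length(p, rng):
    rounds = 0
    while True:
        rounds += 1
        if rng.random() < p:
            return rounds

rng = random.Random(0)
p = 0.2
lengths = [debate_length(p, rng) for _ in range(200_000)]

overall_mean = sum(lengths) / len(lengths)          # close to 1/p = 5
survivors = [n - 3 for n in lengths if n > 3]       # condition on reaching round 3
conditional_mean = sum(survivors) / len(survivors)  # also close to 5
```

Because the conditional expectation matches the unconditional one, every statement at every point faces the same expected demand of defensibility, as argued above.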
Factored Cognition
I currently think all my concerns can be addressed if we abandon the link to factored cognition and defend a less ambitious thesis about debate. The feb 2020 proposal does touch on some of my concerns there, by enforcing a good argumentative structure, rather than allowing the debate to spiral out of control (due to e.g. delaying tactics).
However, my overall position is still one of skepticism wrt the link to factored cognition. The most salient reason for me ATM is the concern that debaters needn’t structure their arguments as DAGs which ground out in human-verifiable premises, but rather, can make large circular arguments (too large for the debate structure to catch) or unbounded argument chains (or simply very very high depth argument trees, which contain a flaw at a point far too deep for debate to find).
ETA: Having now read more of the feb 2020 report, I see that very similar concerns are expressed near the end—the long computation problem seems pretty similar to what I’m pointing at.
If I assert “X because Y & Z” and the depth limit is 0, you aren’t intended to say “Yup, checks out,” unless Y and Z and the implication are self-evident to you. Low-depth debates are supposed to ground out with the judge’s priors / low-confidence in things that aren’t easy to establish directly (because if I’m only updating on “Y looks plausible in a very low-depth debate” then I’m going to say “I don’t know but I suspect X” is a better answer than “definitely X”). That seems like a consequence of the norms in my original answer.
In this context, a circular argument just isn’t very appealing. At the bottom you are going to be very uncertain, and all that uncertainty is going to propagate all the way up.
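One toy way to see the propagation (purely illustrative; treating the premises as independent conjuncts is a simplifying assumption):

```python
# Toy illustration of uncertainty propagating up an argument tree: if a
# claim rests on premises that each only reach modest confidence at the
# depth limit, the root claim ends up much weaker than any single premise,
# so a circular argument cannot bootstrap itself to high confidence.

def root_confidence(leaf_confidences):
    prod = 1.0
    for c in leaf_confidences:
        prod *= c
    return prod

root_confidence([0.7, 0.7, 0.7])  # roughly 0.34, well below each 0.7
```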
If you do it this way the debate really doesn’t seem to work, as you point out.
For my part I mostly care about the ambitious thesis.
I disagree with both of these as objections to the basic strategy, but don’t think they are very important.