Zac Kenton—Senior Research Scientist in the AGI Safety & Alignment team at Google DeepMind. Formerly at MILA, FHI, OATML (Oxford).
The post has now been edited with the updated plots for open consultancy/debate.
Thanks for the comment Fabien. A couple of points:
Open debate accuracy is (almost) the same as debate accuracy; the only exception is how we handle invalid answers, which is very rare. That's because the data is almost exactly the same: we're just marking one debater as the protagonist based on which answer that model would choose under direct QA. So it isn't bad that open debate has the same accuracy as debate; that was expected. It is somewhat bad that it's worse than open consultancy, though we didn't try a 'fully open debate' in which both debaters can pick the same side (or opposite sides, perhaps under resampling/rephrasing etc.). That would probably be a better comparison to open consultancy.
Your point about open consultancy, which I roughly understand as 'the weak judge would score higher if it just trusted the consultant', is a good one, and it made us double-check our filtering code. We did have a bug there: we accidentally used the weak judge's model under direct QA to select the consultant's answer, when we should have used the consultant's model (and similarly for debate). Fixing that bug brings the open consultancy accuracies for weak judges roughly in line with the direct QA accuracy of the strong consultant's model (so it is better than open debate), and slightly increases the protagonist win rate (without affecting debate accuracy).
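For concreteness, the fix amounts to something like the following sketch (the field names here are hypothetical, not our actual codebase):

```python
# Illustrative sketch of the filtering fix described above.
# In open consultancy, the consultant argues for the answer its *own*
# model picks under direct QA -- not the answer the judge's model picks.

def assigned_answer(record):
    """Select which answer the consultant is assigned to argue for.

    Buggy version selected on the judge's model:
        return record["judge_direct_qa_answer"]
    Fixed version selects on the consultant's model:
    """
    return record["consultant_direct_qa_answer"]
```

The two fields only differ on questions where the weak judge and strong consultant disagree under direct QA, which is exactly where the bug depressed the reported open consultancy accuracy.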
Thanks so much for this—it prompted us to look for bugs! We will update the arxiv and add an edit on this blogpost.
On the footnote: sorry for the confusion, but we do still think it's meaningful to take the answer to be what the judge gives, as we're interested in the kind of feedback the consultant would get during training, rather than just in how the consultant is performing (for which consultant accuracy is more appropriate).
And yes, I am interested in versions of these protocols that incentivise arguing for the side 'you believe to be true'/'that is most convincing', and in seeing how that affects the judge. We're aiming for these setups in the next project (e.g. the 'fully open debate' above).
On scalable oversight with weak LLMs judging strong LLMs
Discussion: Challenges with Unsupervised LLM Knowledge Discovery
Thanks for the comment Michael. Firstly, I just wanted to clarify the framing of this literature review: when considering the strengths and weaknesses of each threat model, we did so in light of what we were aiming to do, namely generate and prioritise alignment research projects, rather than as an all-things-considered direct critique of each work (I think that is best done by commenting directly on those articles etc.). I'll add a clarification of that at the top. Now to your comments:
On your 1st point: I think the lack of specific assumptions about the AGI development model is both a strength and a weakness. We mention the weakness because it makes it harder to generate and prioritise research projects. It could be helpful to say more explicitly, or earlier in the article, what kind of systems you're considering, perhaps pointing to the closest current prosaic system, or explaining why current systems are nothing like what you imagine the AGI development model to be.
On your 2nd point: What I meant was more "what about goal misgeneralization? Wouldn't that mean the agent is likely not to be wireheading, but pursuing some other goal instead?" You hint at this at the end of the section on supervised learning, but that was in the context of whether a supervised learner would develop a misgeneralized long-term goal, and you settled on being agnostic there.
On your 3rd point: It would have been interesting to read arguments for why it would need all available energy to secure its computer, rather than satisficing at some level, or some detail on the steps by which it builds the technology to gather the energy, or on how it would convert that into defence.
Threat Model Literature Review
Clarifying AI X-risk
I haven't considered this in great detail, but if there are variables, then I think the causal discovery runtime is . As we mention in the paper (footnote 5), there may be more efficient causal discovery algorithms that make use of certain assumptions about the system.
On adoption: perhaps if one encounters a situation where the computational cost is too high, one could coarse-grain the variables to reduce their number. I don't have results on this at the moment, but I expect that the presence of agency (none, or some) is robust to the coarse-graining, though the exact number of agents is not (Example 4.3), nor are the variables identified as decisions/utilities (Appendix C).
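As a toy illustration of what coarse-graining a graph might look like (a hypothetical adjacency-dict representation, not the paper's procedure):

```python
# Coarse-grain a directed graph: merge a set of variables into one
# macro-variable, keeping edges between the group and the rest.
# Note: in general this can introduce cycles or change which variables
# look like decisions/utilities -- consistent with the caveats above.

def coarse_grain(edges, group, merged_name):
    """edges: dict mapping each variable to the set of its children."""
    new_edges = {}
    for parent, children in edges.items():
        p = merged_name if parent in group else parent
        kids = {merged_name if c in group else c for c in children}
        kids.discard(p)  # drop self-loops created by the merge
        new_edges.setdefault(p, set()).update(kids)
    return new_edges

# A 4-variable chain A -> B -> C -> D coarse-grained to 3 variables:
g = {"A": {"B"}, "B": {"C"}, "C": {"D"}, "D": set()}
cg = coarse_grain(g, {"B", "C"}, "BC")
# cg is {"A": {"BC"}, "BC": {"D"}, "D": set()}
```

The point is only that the number of variables shrinks, which is what would reduce the discovery cost; whether agency survives the merge is the empirical question.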
Thanks, this has now been corrected to say ‘not terminal’.
Thanks for featuring our work! I'd like to clarify a few points, which I think share a common theme: our study treats the protocols as inference-only (which is cheap and quick to study, and possibly indicative), whereas what we care more about is protocols for training (which is much more expensive and will take longer to study). Training was out of scope for this work, though we intend to look at it next based on our findings: e.g. we have learnt that some domains are easier to work with than others, and that some baseline protocols are more meaningful/easier to interpret. In my opinion this is time well spent to avoid pouring much more money and time into rushing into finetuning with a bad setup.
I haven't carefully thought through these estimates (especially the use of an article, which to me seems to depend largely on the article length), but it looks like you're considering inference costs. In the eventual use case of using scalable oversight for training/finetuning, the cost of training is amortised. Typical usage would then be to sample once from the finetuned model (the hope being that training incentivises the initial response, e.g. for truthfulness; you could play out the whole debate at deployment if you wanted, e.g. for monitoring, but this isn't necessary in general). It would be more appropriate to calculate finetuning costs, as we don't think there is much advantage to using these as inference procedures. We'll be in a better position to estimate that in the next project.
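To illustrate the amortisation point with entirely placeholder numbers (none of these reflect real costs; only the shape of the formula matters):

```python
# Back-of-the-envelope amortisation: a one-off finetuning cost spread
# over many queries vs. paying for a multi-turn debate on every query.
# All figures below are hypothetical.

def amortised_cost_per_query(finetune_cost, num_queries, sample_cost):
    """One-off finetune cost spread over all queries, plus one sample each."""
    return finetune_cost / num_queries + sample_cost

def debate_at_inference_cost_per_query(num_turns, sample_cost):
    """Playing out the full debate at deployment pays per turn, every query."""
    return num_turns * sample_cost

# Hypothetical: a $10k finetune amortised over 10M queries, vs. running
# a 6-turn debate on every query at $0.002 per model call.
amortised = amortised_cost_per_query(10_000, 10_000_000, 0.002)
per_query_debate = debate_at_inference_cost_per_query(6, 0.002)
```

As the number of queries grows, the finetuning term shrinks towards the cost of a single sample, which is why comparing raw inference costs understates the case for training-time protocols.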
Actually, in theory at least, one should be able to do better even without models being explicitly misaligned/deceptive (that is the hope for debate over other protocols like consultancy, after all). We think our work is interesting because it provides some mixed results on how that plays out in a particular empirical setup, though it is clearly limited by being inference-only.
This is probably too strong a claim—we've tried to highlight that our results are relatively mixed on the outcomes of the protocols, and they are limited by being inference-only.